0xjokereven


Hello World of Fluence: How Does Fluence Run?

Operating Principle
Protocol
The Fluence protocol builds a network of computing resources and defines a cloudless stack for executing computational tasks.

Fluence Network Overview

Cloud-agnostic functions, Aqua
The Fluence network operates entirely on the Aqua protocol, providing decentralized and secure distributed execution. Aqua is also a domain-specific scripting language compiled into π-calculus operations, allowing for the expression of deadlock-free distributed algorithms. Aqua does not require deployment: Aqua scripts are packaged as data packets and executed as the script propagates through the network. Additionally, Aqua ensures the cryptographic security of script execution: all requests and their step-by-step execution are signed by participating nodes, making executed scripts auditable and verifiable.

Code written in Aqua is referred to as cloud-agnostic functions because it runs across clouds, servers, and regions. Cloud-agnostic functions drive client applications and the Fluence protocol itself. Service discovery, routing, load balancing, subnet bootstrapping and operations, scaling, and fault-tolerant algorithms all run on Aqua and manifest as cloud-agnostic functions.
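The signed, deployment-free propagation described above can be sketched as a script packet hopping between peers, each appending a signed entry to its trace. This is a minimal illustrative sketch: the peer names, the packet layout, and the use of HMAC as a stand-in for real per-peer signatures are all assumptions, not Fluence's actual wire format.

```python
# Sketch: an Aqua-style script packet accumulates a signed execution trace
# as it propagates from peer to peer. HMAC over a shared per-peer secret
# stands in for real asymmetric signatures (illustrative assumption).
import hashlib
import hmac
import json

PEER_KEYS = {"peer_a": b"ka", "peer_b": b"kb", "peer_c": b"kc"}

def sign_step(peer: str, trace: list, step: str) -> dict:
    # Each peer signs its step together with everything that came before it,
    # so the whole execution path is auditable afterwards.
    payload = json.dumps({"prev": trace, "step": step}, sort_keys=True).encode()
    sig = hmac.new(PEER_KEYS[peer], payload, hashlib.sha256).hexdigest()
    return {"peer": peer, "step": step, "sig": sig}

packet = {"script": "greeting", "trace": []}
for peer, step in [("peer_a", "route"), ("peer_b", "execute"), ("peer_c", "return")]:
    packet["trace"].append(sign_step(peer, list(packet["trace"]), step))

print([e["peer"] for e in packet["trace"]])  # each hop left a signed entry
```

No script was deployed anywhere: the "program" is the packet itself, and the trace it carries is what later makes the execution verifiable.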

Computational functions, Marine
While Aqua manages the execution topology and flow between servers and the cloud, Fluence uses Marine to run computational tasks on nodes. Marine is a WebAssembly runtime that allows for the execution of multiple modules, combining interface types and WASI for effector access.

Marine powers computational functions, which are executed within a single machine, similar to serverless cloud functions. Marine supports Rust and C++ as source languages and will support more languages as the WebAssembly standard evolves.

Subnets
All computational functions in the network are deployed through replication to ensure fault tolerance: if a node becomes unavailable due to hardware failure or a network interruption, the remaining replicas continue serving requests. The replicated deployment is called a subnet. Subnets are fully customizable by developers through cloud-agnostic functions, allowing developers to enable failover, load balancing, consensus, or any other custom algorithm.

Subnets provide highly available data storage on Fluence, which can be used for hot caching or data indexing, while storage of large amounts of data is outsourced to external storage networks, whether centralized (such as S3) or decentralized (such as Filecoin or Arweave).
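The replication-plus-failover behavior can be sketched in a few lines. The node names, the health flag, and the write-to-all replication model are illustrative assumptions; real subnets run whatever algorithm the developer expresses in cloud-agnostic functions.

```python
# Sketch: a value is replicated to every node in a subnet; reads fail over
# to the first healthy replica. Names and health model are illustrative.
def deploy(replicas):
    return {r: {"healthy": True, "value": None} for r in replicas}

def put(subnet, value):
    for state in subnet.values():      # replicate the write to all nodes
        state["value"] = value

def get(subnet):
    for name, state in subnet.items(): # fail over past unhealthy replicas
        if state["healthy"]:
            return name, state["value"]
    raise RuntimeError("no healthy replica")

subnet = deploy(["node-1", "node-2", "node-3"])
put(subnet, "cached-result")
subnet["node-1"]["healthy"] = False    # simulate a hardware failure
print(get(subnet))  # ('node-2', 'cached-result')
```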

Computational Proof
The Fluence protocol enforces cryptographic proofs for all executions in the network. Providers generate proofs for their computations, and clients only need to pay for verified and correct work with accompanying proofs.

Aqua Security
As nodes operate in Fluence, they continuously serve incoming cloud-agnostic function calls that require either forwarding the function onward or executing a computational function on that node. For each incoming cloud-agnostic function, the node verifies the execution trace that preceded it before continuing, ensuring the process has been executed correctly so far. Cloud-agnostic functions executed by erroneous nodes or over disrupted topologies are discarded.

If a cloud-agnostic function requires a node to run a computational function, the node completes the work, extends the execution trace, and further forwards the request as requested. In this way, the protocol ensures correct operations and generates audit records for all executions.
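The verify-then-extend step can be sketched as follows. As in the earlier sketch, HMAC over shared per-peer secrets stands in for real signatures, and the trace layout is an illustrative assumption; the point is that any tampered entry causes the request to be discarded.

```python
# Sketch: before doing its own work, a node re-checks every signed entry
# in the incoming execution trace and discards the request if any fails.
import hashlib
import hmac
import json

PEER_KEYS = {"peer_a": b"ka", "peer_b": b"kb"}

def entry(peer, prev, step):
    payload = json.dumps({"prev": prev, "step": step}, sort_keys=True).encode()
    sig = hmac.new(PEER_KEYS[peer], payload, hashlib.sha256).hexdigest()
    return {"peer": peer, "step": step, "sig": sig}

def verify(trace) -> bool:
    for i, e in enumerate(trace):
        payload = json.dumps({"prev": trace[:i], "step": e["step"]},
                             sort_keys=True).encode()
        good = hmac.new(PEER_KEYS[e["peer"]], payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(good, e["sig"]):
            return False  # erroneous node or disrupted topology: discard
    return True

trace = []
trace.append(entry("peer_a", list(trace), "route"))
trace.append(entry("peer_b", list(trace), "execute"))
print(verify(trace))           # True: the node may extend and forward it
trace[0]["step"] = "tampered"
print(verify(trace))           # False: the request is discarded
```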

Processing Proof
The protocol enforces probabilistic verification of Aqua executions on-chain. Since all computations are encapsulated in Aqua, this means that all operations performed on the platform are probabilistically verified on-chain. Providers must submit the required executions to receive rewards; otherwise, they are penalized.
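"Probabilistic verification" means that only a random sample of submitted executions is actually checked, yet cheating is still caught with high probability over many submissions. The sampling rate and the all-or-nothing reward rule below are illustrative assumptions, not the actual protocol parameters.

```python
# Sketch: spot-check a random fraction of submitted executions; a single
# failed check forfeits the rewards for the batch (illustrative policy).
import random

def settle(executions, check, sample_rate=0.25, rng=random):
    rewarded = 0
    for ex in executions:
        if rng.random() < sample_rate and not check(ex):
            return 0          # a failed spot-check forfeits the rewards
        rewarded += 1
    return rewarded

honest = ["ok"] * 100
print(settle(honest, lambda ex: ex == "ok"))  # 100: all executions rewarded
```

Because a dishonest provider cannot predict which executions will be sampled, even a modest sampling rate makes sustained cheating unprofitable.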

Execution Proof
Execution proofs are cryptographic proofs of specific code executions on Fluence. Each function execution is accompanied by such a proof, generated by Fluence's Marine WebAssembly runtime. These proofs are verified by nodes participating in subsequent executions and are included in processing proofs.

Marketplace
The on-chain marketplace matches compute providers with customers who pay for the use of compute resources. The marketplace is fully permissionless, allowing any provider to participate and advertise their available capacity.

The marketplace is hosted on Fluence's own chain, driven by IPC, validated by Fluence compute providers, and anchored to Filecoin L1. The native chain enables cheap and fast transactions to rent compute resources at any scale: from small workloads to large-scale ones.

Compute Providers on the Fluence Marketplace

Compute Provider Resources

Capacity Proof
Fluence ensures the existence and availability of resources advertised by providers by enforcing a cryptographic proof called a capacity proof. Providers dedicate their hardware resources to continuously generating capacity proofs, confirming that these resources are ready to serve customers. The protocol rewards these resources with Fluence tokens based on the allocated computing power.

Whenever a customer needs the computing power of a selected provider, these resources switch from generating proofs to serving the customer's application.
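The mode switch described above can be sketched as a tiny state machine: a compute unit generates capacity proofs while idle and serves the customer's application once rented. The state names and the proof stub are illustrative assumptions.

```python
# Sketch: a compute unit alternates between proving capacity (idle) and
# serving a customer's workload (rented). Illustrative state machine.
class ComputeUnit:
    def __init__(self):
        self.mode = "proving"
        self.proofs = 0

    def tick(self):
        if self.mode == "proving":
            self.proofs += 1        # submit a capacity proof this round
            return "capacity-proof"
        return "served-request"     # resources now back the customer's app

    def rent(self):
        self.mode = "serving"       # a customer arrived: stop proving

cu = ComputeUnit()
print(cu.tick())   # capacity-proof
cu.rent()
print(cu.tick())   # served-request
```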

Compute Units Submitting Capacity Proofs

Resource Pricing
Customers can choose providers based on advertised prices and other parameters, or post jobs with the maximum price they are willing to pay for any provider to pick up.

Behind the scenes, for each application deployment, a transaction is created on-chain between a customer and a set of providers. The transaction records financial details (price, prepayment, and required collateral from the provider), technical requirements related to the desired service (access to certain data, binary files, or web APIs), and links to code installations in the network.
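The on-chain record described above could be modeled as a plain data structure holding the financial terms, technical requirements, and code links. All field names and values here are illustrative assumptions, not the actual contract schema.

```python
# Sketch: the per-deployment transaction between a customer and providers,
# with illustrative field names (not Fluence's actual contract schema).
from dataclasses import dataclass, field

@dataclass
class Deal:
    customer: str
    providers: list
    price_per_epoch: int                                # financial details
    prepayment: int
    provider_collateral: int
    requirements: dict = field(default_factory=dict)    # data, binaries, web APIs
    code_links: list = field(default_factory=list)      # code installed in the network

deal = Deal(
    customer="0xcafe",
    providers=["provider-1", "provider-2"],
    price_per_epoch=10,
    prepayment=20,
    provider_collateral=50,
    requirements={"web_api": "https://example.invalid"},
    code_links=["ipfs://bafy-example"],
)
print(deal.prepayment)  # 20
```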

Matching Criteria for Provider Selection

Billing Model
Initially, the billing model is prepaid, based on the leasing time of resources, calculated in epochs. The minimum resource lease is one compute unit for 2 epochs, where one epoch is defined as 1 day and one compute unit is 1 core, 4 GB of memory, and 5 GB of virtual disk space. Customers must prepay for the minimum period to ensure providers are compensated for their work.
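Putting the numbers above together, a prepayment works out to compute units × epochs × per-epoch price, subject to the 2-epoch minimum. The per-epoch price used below is an illustrative assumption.

```python
# Worked example of the prepaid billing model: one epoch = 1 day, one
# compute unit = 1 core / 4 GB memory / 5 GB virtual disk, minimum lease
# = 1 compute unit for 2 epochs. The price is an illustrative assumption.
MIN_EPOCHS = 2

def prepayment(compute_units: int, epochs: int, price_per_cu_epoch: int) -> int:
    if compute_units < 1 or epochs < MIN_EPOCHS:
        raise ValueError("minimum lease is 1 compute unit for 2 epochs")
    return compute_units * epochs * price_per_cu_epoch

# 4 compute units (4 cores, 16 GB memory) for a week at 10 tokens/CU/epoch:
print(prepayment(4, 7, 10))  # 280
```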

Request-based billing and elastic compute units will be introduced in the next phase of the project.

Network
The Fluence network can be seen as a set of globally interconnected nodes, each running AquaVM and Marine, capable of receiving commands to deploy and execute code locally and collaborating with other nodes based on received cloud-agnostic functions. Most of these nodes are actively involved in economic activities: they monitor and enter into transactions, form new subnets or adjust subnet participation, install applications as requested by transactions, coordinate executions, and handle incoming requests. They also generate execution proofs and submit these proofs for rewards according to the proof algorithm.
