78 changes: 78 additions & 0 deletions network-rfc/11_tokenomics_v2.1.md
# Tokenomics 2.1

## Overview

Tokenomics 2.1 addresses the centralization issues of Tokenomics 2.0, namely:

- A single (de-facto centralized) pool for providing SQD for yield, which makes SQD a security
- The lack of dynamic pricing: many parameters have to be hard-coded or adjusted in a centralized (or, at best, DAO-like) fashion, and the subscription fee is not established by an open marketplace

It does so by:

- Removing the treasury-initiated buyback-and-burn mechanics, which makes SQD a security
- Moving the reward token out of scope
- Introducing a fee switch for the Portals (to be activated in the future if necessary)
- Making it possible to register Portals on EVM chains (in particular, Base) and on Solana. For Solana users, this opens up the possibility to pay in USDC or SOL.


## The SQD Flows

**Workers**
Workers serve the data and receive rewards in SQD. The rewards depend on the number of served queries, the amount of delegated tokens, and the uptime of the worker. A worker has to lock 100k SQD to participate in the network. The maximum amount of rewards distributed per worker and its delegators is controlled by a single parameter called `TARGET_APR`. The reward is then split between the worker and its delegators.
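As a rough sketch of how `TARGET_APR` bounds the per-worker payout: the epoch length, the query/uptime scaling, and the worker/delegator split factor below are illustrative assumptions, not protocol values — only the 100k bond and the 25% APR come from this document.

```python
# Illustrative sketch of the per-epoch reward cap under TARGET_APR.
# EPOCHS_PER_YEAR and worker_share are assumptions for this sketch.

WORKER_BOND = 100_000     # SQD each worker must lock
TARGET_APR = 0.25         # current setting: 25% yearly
EPOCHS_PER_YEAR = 365     # assumption: one epoch per day

def epoch_reward_cap(delegated: float) -> float:
    """Maximum SQD distributed per epoch to a worker and its delegators."""
    return (WORKER_BOND + delegated) * TARGET_APR / EPOCHS_PER_YEAR

def split_reward(reward: float, worker_share: float = 0.5) -> tuple[float, float]:
    """Split an epoch reward between the worker and its delegators
    (worker_share is a placeholder, not the protocol's split rule)."""
    worker_part = reward * worker_share
    return worker_part, reward - worker_part

cap = epoch_reward_cap(delegated=200_000)   # ≈ 205.48 SQD per epoch
worker_sqd, delegator_sqd = split_reward(cap)
```

The actual reward additionally scales with served queries and uptime; the cap above is only the upper bound set by `TARGET_APR`.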

**Delegators**
Delegators delegate SQD to workers to get a part of the worker reward. Both the amount of delegated tokens and the served queries affect the reward per query served.

**Data consumers**
Data consumers query the network p2p using a Portal. The maximum bandwidth that can be consumed by a Portal is determined by the amount of SQD locked in the Portal contract. Thus, the data consumer either buys SQD on the open market and locks the desired amount themselves, or makes an _SQD Provision Offer_ to SQD holders willing to provide SQD in return for a fee.

An SQD Provision Offer is an agreement to lock SQD for a specified amount of time for a fee, paid continuously over the whole lock period. The fee is locked by the consumer in advance and can be paid in any of the supported tokens. Special conditions apply to extending the provision offer and to withdrawals.

**SQD providers**
SQD providers hold SQD and fulfill matching _SQD Provision Offers_. Active providers can advertise their target fees in advance to make the market and set expectations for the data consumers.

## Emission reduction

The `TARGET_APR`, currently set to 25% yearly, will be gradually reduced and replaced by the fees collected from the Portals.

## Portal Payments

There are two options to get data through the Portals:
- Lock SQD tokens (the existing flows)
- Pay a subscription fee in one of the supported tokens, so that the SQD is locked by one or multiple SQD providers.

**The subscription flow**

The user specifies:
- the required bandwidth (which translates into the required SQD to be locked)
- the terms (fixed-term or auto-extension)
- the price (the dApp will provide the current quotes of the SQD providers to give a reasonable offer)

The user:
- creates a Portal Registration Contract
- makes the fee deposits

The willing SQD providers then lock the SQD for the required term.

The fee is deducted every epoch and automatically split:
- 50% to the SQD providers
- 45% to the worker reward pool
- 5% gets burnt

The fee parameters are adjustable.
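The per-epoch split above can be sketched in a few lines of integer basis-point arithmetic, with the burn absorbing the rounding remainder (the function name and token-unit convention are illustrative):

```python
# Sketch of the per-epoch subscription fee split: 50% to SQD providers,
# 45% to the worker reward pool, 5% burned. Basis points mirror how the
# adjustable parameters would likely be stored on-chain.

PROVIDER_BPS = 5_000   # 50%
WORKERS_BPS = 4_500    # 45%
# the burn takes the remainder (500 bps = 5%), absorbing rounding dust

def split_epoch_fee(fee: int) -> dict[str, int]:
    """Split an epoch's fee (in smallest token units)."""
    providers = fee * PROVIDER_BPS // 10_000
    workers = fee * WORKERS_BPS // 10_000
    return {"providers": providers, "workers": workers,
            "burned": fee - providers - workers}

split_epoch_fee(1_000_000)
# → {'providers': 500000, 'workers': 450000, 'burned': 50000}
```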

![image](https://gist.github.com/user-attachments/assets/9d3977ee-aa80-4a6f-a248-83656abc10e1)


## The fee switch

Apart from the subscription fees, Tokenomics 2.1 introduces a fee switch applied directly to the Portals.
The fee switch is initially set to zero, but can be switched on at a later time.

When it is on, the fee is deducted from the SQD locked in every Portal contract and distributed between the burn and the reward pool.
That way, even users self-staking SQD pay a usage tax. For SQD providers, the tax may be compensated directly by the fee.


## Deployments to Solana and other networks

The Portal Registration factories allow deployments on foreign networks, such as Base and Solana, assuming two-way bridging is possible. In order to integrate a foreign chain, one would need to:

- Update the Worker nodes to listen to the registration events
- Establish a canonical "bridged" token on the target chain with minimal liquidity pools
- Implement bridging of the fees, teleporting them to the host chain regularly to top up the reward pool
74 changes: 74 additions & 0 deletions network-rfc/12_contracts_design.md
# Portal System


### Portal Pool Creation and SQD Collection

The Portal System begins when a Portal Operator creates a new portal pool through the PortalFactory by specifying key parameters including the maximum capacity (amount of SQD tokens), deposit deadline, payment token, and "budget".

The portal operator allocation (contribution by the deployer) will be determined by the maximum capacity that the portal operator is seeking.

The maximum capacity parameter defines the total amount of SQD that can be staked into the portal pool. This can be configured by the portal operator during creation and can be increased later if needed. The contract requires this amount to be higher than the minimum stake threshold (set by the protocol governance/gateway registry contract) for portal registration.

The factory deploys a single PortalPool contract, an upgradeable instance that combines both the core distribution logic and SQD vault functionality into one unified contract.
Once deployed, SQD token providers can stake their tokens directly into the PortalPool by calling the stake function with the portal pool address and desired amount.

During this collection phase, the portal pool remains in a "Collecting" state where it accumulates SQD deposits from multiple providers until either the maximum capacity is reached or the deposit deadline passes.


Do we want to have a soft cap, so that when the deposits surpass the minimum stake amount, the portal can be activated immediately while still seeking SQD providers?


That may actually be a good idea 🤔
We would then only have 3 states, right? Inactive — lower than the minimum required SQD, Partial — able to run but open for more SQD, and Full — no more capacity for SQD providers

Let's put it in the doc and discuss with everyone else in a meeting


10.Nov Call:
Instead of introducing three separate states, we’ll rely on the flexibility of the new GatewayRegistry contract. Each stake is passed directly to the GatewayRegistry, and once the total stake is above MINIMUM (100K), we can deterministically calculate the CUs, since CUs are exposed via a view function

If sufficient SQD is collected before the deadline (meeting the minimum threshold for portal registration), the portal operator can trigger the activate function to transition the portal pool to its active distribution phase.

However, if the deadline expires without reaching the minimum threshold required for portal registration, the portal pool is marked as failed, triggering a full refund of both the operator's budget and all staked SQD tokens back to their respective owners.

### Active Distribution and Fee Routing
Once activated, the portal pool enters its Active state, where it begins distributing rewards.

Throughout this active period, the portal operator can call the distribute function to inject tokens into the contract, which will be distributed across SQD providers etc.

DZ proposed to have a "gradual" distribution instead of direct payments to SQD providers. Something similar to how the worker rewards work now. How I see it could work:

- For portal operators the workflow stays almost the same: they top up the contract once in a long period, e.g. a month. However, at pool creation they also specify the expected earnings in USDC/day and can modify it later. The top-up amount stays in the contract instead of being immediately distributed.
- SQD providers can see their expected share of USDC/day of the given pool when locking SQD (also converted to APY at the current SQD price as a visual hint). In the UI they will be able to see their "current balance" of unclaimed USDC and can claim it to their wallet by issuing a transaction.

This actually achieves multiple goals:

- Better UX for delegators, because they start earning every minute from the moment they join
- Better UX for operators, because they clearly understand how much to pay and can compare their offer to other pools
- Portal operators can pre-fund their pool so that delegators can be more confident in future earnings
- The receiver pays the gas for ERC20 transfers

The main problem is what happens when the contract's token balance runs down to 0.

- One option is to allow "negative balances" to be topped up in the future, while continuing to "promise" stable earnings to delegators. In this case someone may end up with a visible balance that they can't actually claim.
- Another option is to stop topping up the balances at the moment when the contract can't back every "promised" dollar with the current balance, and start recalculating the actual APY accordingly. This may be much harder to implement but sounds more fair for everyone.


IMO let's keep it honest & dynamic.
The "negative balance" approach is risky: imagine a provider sees "$100 earned" but when they try to claim, they get $0. This could kill trust.

Worst case: the operator doesn't top up and we end up showing huge negative numbers that nobody can actually withdraw :/

I'd go with dynamic rate adjustment, so when the pool runs low, the rate adjusts to reflect reality.
No fake promises. If the operator doesn't fund it, the rate just drops to 0 until they top up again.


How I see this could work in the contract

- The portal operator sets the distribution rate $R$ USDC/block and can change it later at any moment.
- The portal operator pre-funds the contract and can later top it up at any frequency, prolonging the runway. They can see the contract balance at the current block and should make sure that it never reaches 0.
- The fee $f \in [0, 1]$ is set in the contract to be collected to the treasury, leaving only $R \cdot (1 - f)$ to be distributed to the delegators.
- If someone owns a share $w$ of the total stake ($w \in [0, 1]$), they get $w \cdot R \cdot (1 - f)$ USDC "added" to their balance each block. They can see the withdrawable balance at the current block and the rate at which it changes.

Then we just need to make sure that at any moment the balance is withdrawable, meaning that the sum of balances of all delegators doesn't exceed the amount of USDC sitting in the contract.
If the portal operator always tops up the contract at least at the rate of $R$, this is already achieved. So what happens if they fail to top up in time?

1. The earning rate displayed to the delegators immediately becomes 0, and the withdrawable balance stays at the current number. I think we agreed on that in the previous comment: no fake promises.
2. We can still display something like the average earning rate: the total amount of top-ups in the history of the pool divided by the duration it has existed for. It will slowly start to get lower every block.
3. When the operator decides to top up again, I can see two options for what should happen:
   1. All the balances are recalculated, and we continue running as before, starting from the current block. The average earnings then get lower than what was promised.
   2. We can force the operator to pay the debt and fail the transaction if they don't add enough funds. The delegators won't be able to claim more rewards while the balance is zero, but later they will claim the full amount without even noticing that the wallet was empty at some point.
4. Until it's topped up, the delegators only have the option to keep waiting or to unstake according to the usual rules.

Option 3.ii is a much better experience for the delegators, with the only risk that if the operator starts earning less and forgets about the balance running out, it may be hard for them to ever recover from that state.


Ideally, we should let the operator decide whether to pay the debt at the time of topping up or not. And then provide a good analytics page to view the history

@kalabukdima kalabukdima Dec 16, 2025


After discussion with EF we came to the conclusion that it should be enough to implement option 3ii. Forcing the operator to pay the debt covers the most important case — when the operator just forgot to top up. It also enables a much simpler APR calculation.
If the operator doesn't want to pay the debt, they have an option to close the pool, unlocking SQD.


Here's how I see the implementation

Smart contracts

At the portal creation, the operator defines the distribution rate per second, which gets split into the worker/burning fees and distributed delegator rewards per second. This rate is split among all the delegators depending on their share (`rewards_rate * locked_tokens / pool_capacity`). Later the operator can change the distribution rate.

Once the contract is topped up, it recalculates the current "operator balance" and saves a checkpoint: the new balance and the current timestamp. This information is enough to return the current balance in a read method at any point later (`last_balance + time_delta * rate`). An event corresponding to the checkpoint is also emitted to allow indexers to replicate this behaviour and plot the balance change over time.

Such recalculation also has to be done on every change in the total staked amount, because the difference between the total stake and the capacity should stay on the pool's balance.

At every rate change and pool capacity change, the rewards rate changes for all the delegators. At these points the contract recalculates the new rate for each of them, stores checkpoints in the same way, and emits the corresponding events. Knowing the last checkpoint and the distribution rate allows you to calculate the claimable balance at any moment in the future. This calculation should also consider the runway of the current operator balance. The formula becomes something like `last_balance + min(time_delta, runway) * total_reward_rate * stake / total_capacity`.

Every time the delegator claims the rewards, the checkpoint is also created, but only for this delegator.
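The claimable-balance formula can be sketched as follows; the `Checkpoint` layout, names, and time unit are illustrative, only the formula itself follows the description above:

```python
# Sketch of the checkpoint pattern: a delegator's claimable balance is
# last_balance + min(time_delta, runway) * reward_rate * stake / capacity.

from dataclasses import dataclass

@dataclass
class Checkpoint:
    balance: float   # claimable USDC at the checkpoint
    timestamp: int   # seconds (illustrative time unit)

def claimable(cp: Checkpoint, now: int, total_reward_rate: float,
              stake: float, total_capacity: float, runway: int) -> float:
    """Claimable balance at `now`, capped by the operator's remaining runway."""
    elapsed = min(now - cp.timestamp, runway)
    return cp.balance + elapsed * total_reward_rate * stake / total_capacity

# 10% of capacity, 0.02 USDC/s total rate, one hour elapsed, but only
# 30 minutes of runway left on the operator balance:
cp = Checkpoint(balance=5.0, timestamp=0)
claimable(cp, now=3600, total_reward_rate=0.02,
          stake=100_000, total_capacity=1_000_000, runway=1800)
# → 5.0 + 1800 * 0.02 * 0.1 = 8.6
```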

UI

After activation, the pool is always in one of two states:

- active: distributing exactly the specified rate, or
- out of money: the operator forgot to top up the pool, so it's not known whether they will top it up later or never top it up again.

In the first case, the UI can get the current (at the moment T) balance and the distribution rate, and start showing a constantly increasing number.
In the second case, it should just show the current balance (it's not increasing anymore), and warn the user that the pool is out of money until it's topped up again.

After the pool is topped up, the historical APR gets back to normal without any slumps because the missing funds were compensated.


This amount is distributed via the FeeRouterModule, a separate admin-controlled contract responsible for splitting the fees according to configurable basis point allocations (a configurable k% goes to the treasury and the rest goes to SQD providers). The FeeRouterModule holds the actual BPS values.

During both the staking and distribution phases, the system can trigger external Hooks at key moments (before and after staking, distribution, and exits), allowing for customized behavior such as additional protocol token rewards layered on top of base distributions, similar to Uniswap v4 Hooks.


The portal scales down its capacity as SQD is withdrawn but continues operating until the minimum threshold is breached.


**Two-Step Withdrawal Process:**

1. **Portal Pool Exit Delay**: Exits are subject to a time-delay mechanism designed to prevent sudden liquidity shocks. The exit delay consists of a base period of 1 epoch plus a percentage-based delay determined by the amount being withdrawn. The system allows a maximum of 1% of the total portal pool liquidity to exit per epoch, meaning that if a provider wants to exit 5% of the liquidity, they must wait 1 epoch (base) plus 5 additional epochs (one epoch per 1% of liquidity), totaling 6 epochs before their full withdrawal is processed. Providers can withdraw unlocked portions incrementally (1% per epoch) rather than waiting for the full delay period to complete.
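The delay arithmetic can be sketched in a couple of lines (the function name is illustrative):

```python
# Sketch of the exit delay: 1 base epoch plus one epoch per 1% of total
# pool liquidity being withdrawn (1% unlocks each epoch).

import math

def exit_delay_epochs(amount: float, pool_total: float) -> int:
    """Epochs until the full withdrawal is processed."""
    pct = 100 * amount / pool_total
    return 1 + math.ceil(pct)   # base epoch + one epoch per 1%

exit_delay_epochs(50_000, 1_000_000)   # 5% of the pool → 6 epochs
```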

To protect against the "whales" who have many locked tokens, we should use absolute values for allowed unlock speed.

With the described mechanism, everyone will have to wait the same duration to unlock 100% of their stake — no matter if you own the entire pool or just 1 SQD.
Instead it should be something like "everyone can unlock a limited amount per block" making it easy to unlock 100 SQD but making you wait (and probably submit multiple transactions) to unlock half of the pool.

@Gradonsky Gradonsky Dec 3, 2025


Following our call, there is a problem regarding the withdrawal mechanisms.
Both current ideas have open issues:
**Percentage-based unlock (e.g. 1% per epoch)**

- Pros: scales with pool size, naturally slows down large withdrawals.
- Cons: can be bypassed by splitting stake across many wallets / LST positions (e.g. 500 × 0.5% = same as a big whale exit).

**Absolute unlock rate**

- Pros: fair per-unit rate for everyone, independent of pool share; whales can't drain everything in a single tx.
- Cons: same fragmentation problem (many wallets).

One idea discussed on the call was to combine a fixed base delay (e.g. N blocks) with an absolute per-block unlock cap, so even if exits are split across multiple wallets, everyone still waits at least the base period.

As mentioned, both are still vulnerable to splitting LPTs (Liquid Portal Tokens) across many wallets (same bypass issue).

We need to decide whether to adopt:

- absolute cap
- percentage cap
- or a base delay + cap hybrid
- ideas?


My initial proposal

There is a global limit enforced by the underlying registry contract — $T$ SQD can be unlocked per block, let's say 100. Now suppose someone owns $w$ of the total stake ($w \in [0, 1]$). Then I suggest this delegator can unstake $w \cdot T$ SQD per block. This way it doesn't matter whether you split your delegation into multiple piles or not — every SQD locked in the contract gets unlocked at the same rate, making it 100 times faster to unlock 1000 SQD than it takes to unlock 100,000 SQD.

Also, there is a limit on how much can be unlocked in a single transaction — to guarantee that the active SQD balance doesn't immediately get too low, making the portal unusable. And you're right, @Gradonsky, this limit can be avoided by sending your stake to multiple contracts.

I can see a couple of options if we want to solve this problem.

Option 1. Calculate the rate depending on the current withdrawal queue

Instead of using the share of the total stake $w$, only calculate how much every currently withdrawing delegator requests and split the rate $T$ proportionally between them. So that if only one delegator is withdrawing, he gets the maximum rate, $T$. If one is withdrawing 100k SQD and another requested a withdrawal of 200k SQD, starting from that transaction, the first gets $\frac{1}{3} T$ SQD unlocked per block, and the second gets $\frac{2}{3} T$ SQD unlocked per block.

This way it doesn't make sense to split the stake between multiple wallets — you'll still get the same total withdrawal rate. But you can still destabilize the portal by doing that.

Additionally, we can limit the number of concurrent active withdrawals, making the next request wait until someone else's withdrawal request gets fulfilled.
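Option 1's proportional split of the rate $T$ can be sketched as (the rate value and names are illustrative):

```python
# Sketch of Option 1: the global unlock rate T is split among the pending
# withdrawal requests in proportion to the requested amounts, so splitting
# a stake across wallets yields the same total rate.

T = 100  # SQD unlocked per block (illustrative global limit)

def per_block_unlock(pending: dict[str, float]) -> dict[str, float]:
    """pending: delegator -> SQD still waiting. Returns SQD/block for each."""
    total = sum(pending.values())
    if total == 0:
        return {}
    return {who: T * amount / total for who, amount in pending.items()}

per_block_unlock({"A": 100_000, "B": 200_000})
# A unlocks T/3 SQD per block, B unlocks 2T/3
```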

Option 2. Only allow one withdrawal at a time.

A much simpler approach in terms of implementation, but maybe not fair enough. If we only allow one active (limited) withdrawal, making everyone else wait until it's completed, then it can be fulfilled at the maximum rate of $T$. After that, either the same person or anyone else submits their withdrawal request and waits until it's completed. Additionally, we can disable partial withdrawals, only allowing the claim of the whole requested amount after the unlock period.

Together with the ability to transfer/sell your position on the market and the ability to request immediate withdrawal by paying the fee, I think this is not a bad option.


In Option 1.
What happens if there is a whale who wants to withdraw 100K SQD, and an "attacker" who splits 10K SQD into 1,000 wallets with 10 SQD each and requests withdrawals on all of them?
Does this dilute the whale so they receive only a small fraction of the withdrawal throughput, while the attacker’s many small wallets take most of the bandwidth?


In option 1, which is still based on relative stake being withdrawn, the whale will get 10/11-th of the rate, and the attacker will get 1/11-th in total (1/11000 per wallet).

If we limit the number of concurrent active withdrawals, then yes, the whale will have to wait until the attacker's 10k SQD becomes fully unlocked. However, it will take the same amount of time to withdraw 10k SQD at once as it would take to wait for 1,000 withdrawals of 10 SQD each. So for the attacker, it doesn't make much sense to split past the concurrency limit. And they need to own quite a big share to actually mess with other delegators, which is a reasonable limitation, I think.


I thought the limit of the queue size could be something like 10. How much gas would it cost then approx? Also, only those who enter later will have an increased gas usage, right?


Hmm, what do you think about a Cumulative Conveyor Belt (Global Leaky Bucket) model?

This system tracks a single global totalRequested counter and processes exits at a fixed globalUnlockRate (e.g., 100 SQD/block), effectively placing every request on a continuous timeline. This renders Sybil attacks useless because the time cost is determined solely by the total volume ahead of you, not the number of participants: a Whale requesting 100k SQD occupies the same "length" on the belt as an Attacker splitting 100k SQD into 1,000 wallets, meaning the last split transaction finishes at the exact same block as the single large transaction would have.

Queue:

```
[====User A (1000 SQD)====][====User B (5000 SQD)====][==User C (500 SQD)==]
        ←── Belt moves at fixed rate (1000 SQD/block) ──→
```

- Each exit request gets a "ticket" with a position on the belt
- The belt moves at constant speed regardless of who's in line
- You can claim when your position has been "processed"
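A minimal sketch of the belt (class and field names are illustrative; the per-block rate matches the diagram):

```python
# Sketch of the conveyor belt / global leaky bucket: one cumulative counter,
# drained at a fixed rate. A request's finish block depends only on the total
# volume queued ahead of it, so Sybil-splitting a stake gains nothing.

UNLOCK_RATE = 1_000  # SQD processed per block (illustrative)

class ExitQueue:
    def __init__(self, start_block: int) -> None:
        self.start_block = start_block
        self.belt_end = 0   # cumulative SQD position of the belt's tail

    def request_exit(self, amount: int, block: int) -> int:
        """Queue `amount` SQD; return the block when it is fully unlocked."""
        processed = (block - self.start_block) * UNLOCK_RATE
        # A new ticket starts behind the current tail, or at the belt's
        # head if the queue has already drained past the tail.
        self.belt_end = max(self.belt_end, processed) + amount
        return self.start_block + -(-self.belt_end // UNLOCK_RATE)  # ceil div

q = ExitQueue(start_block=0)
q.request_exit(1_000, block=0)   # user A: fully unlocked at block 1
q.request_exit(5_000, block=0)   # user B: fully unlocked at block 6
```

Note the Sybil property: 100k SQD queued as one request or as 1,000 requests of 100 SQD finishes at the same block.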


Oh, right, this is similar to option 2, but instead of rejecting your request, you get to the end of the queue, knowing in advance how long you'll have to wait. I like it!
The only catch is we'll have to limit the total length of the belt so that the portal doesn't get a quick drop in CUs.


@dzhelezov do you think it will be too restrictive if we queue up everyone who wants to withdraw their stake? Usually everyone will request at different times without even noticing this mechanism. But if there is a massive run, the first-to-request – first-out rule starts to work. It simplifies implementation significantly as well.


yes, I think this "conveyor belt" is the right approach, and afaik something similar is implemented for the Ethereum validator withdrawal queue.


2. **Liquid Unstaking from Registration**: Once the exit delay period completes and the withdrawal is processed by the portal pool contract, the Portal Registration Contract handles the actual unstaking. Since the registration contract supports liquid staking, the unstaking happens immediately without additional lock periods. The compute units allocated to the portal are reduced proportionally as SQD is unstaked.

For example, if a provider holds 10% of the portal pool's total SQD and wants to exit their entire position, they would need to wait 1 base epoch + 10 epochs (for the 10% withdrawal) = 11 epochs total in the portal pool.

**Importantly, once a provider requests an exit, they stop earning rewards on the requested exit amount during the entire waiting period**

New SQD providers can enter the portal pool at any time, including when existing providers have requested exits. This allows for seamless replacement and maintains liquidity continuity in the pool. When new providers stake, the portal registration contract immediately increases the allocated compute units proportionally.

Throughout this entire process, the system maintains upgradeability through the proxy pattern (allowing the factory admin to deploy improved implementations without affecting existing portal pools), adjustable fee distributions (admins can modify the FeeRouterModule configuration to change allocation percentages), and emergency controls (pausing functionality at both the factory and individual portal pool levels for security purposes).

---

## State Transitions

```
Collecting ──→ Active ──→ Closed
     │
     └──→ Failed
```

- **Collecting**: Portal pool accepting SQD deposits, waiting to reach minimum threshold before deadline
- **Active**: Minimum threshold met, portal registered and distributing tokens when injected. Portal continues operating as long as staked amount remains above minimum threshold, with compute units scaling proportionally.
- **Failed**: Deadline passed without reaching minimum threshold, full refunds enabled
- **Closed**: Portal pool closed?


### To Discuss

I think we also talked about support for transferring the stake. Something to add on the next iteration probably, but maybe worth adding to the design doc


1. **Liquid Stake Tokens (LST) vs NFTs**:


Decision needs to be made here:

**ERC-20 (separate token per portal)**

Upsides:

- Much simpler to understand for users
- Basically every DeFi protocol supports ERC20 out of the box
- Easy wallet support (standard token flow)
- Can trade instantly on Uniswap/DEXs, super straightforward (after pool creation)
- Each portal = its own token + symbol (example: LPT-PORTAL1)

Downsides:

- If we have 100+ portals → that's 100+ token contracts deployed
- I'm not bringing up gas because on Arbitrum it barely matters
- Harder to track all user positions in a single clean view
- No unified contract that holds everything together
- Users must manually add each token to the wallet

**ERC-1155 (one contract, many token IDs)**

Upsides:

- One contract to rule all portal positions
- Single deployment, much cleaner on the infra side
- Batch transfers possible (exit multiple portals in one tx)
- Super easy portfolio querying (one contract = full overview)
- Native metadata URI support

Downsides:

- We lose most DeFi compatibility
- Harder to trade on secondary markets
- Wallets show it differently (NFT-style vs token-style)
- More complex approval flow (setApprovalForAll)

IMO: we should stick with ERC20 as it simply unlocks way more value through DeFi integrations.


I'm not sure creating a pool per token on DEXes can deliver a good UX — the liquidity there will mostly stay at zero I think.

I hope DeFi infrastructure will eventually mature to support trading ERC-1155 tokens. Until then, it's not really clear how we can provide a good experience.


Discussed with DZ and EF that it should be separate ERC20 tokens to make use of better infrastructure. We don't really care about liquidity and being able to sell that token. There are already enough tools in the ecosystem to work with tokens that are not liquid

- SQD provider positions could be represented as Liquid Stake Tokens (fungible) or NFTs (non-fungible)
- Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)



10.Nov Call:

Contract: PortalPool & System Architecture
Type: Design Decision

Staking Limits per Portal:

Question: Should we enforce a maximum maxCapacity (hard cap) for each PortalPool, or should we allow unlimited staking and let APY dilution act as a natural cap?

IMO, this approach is simpler and trusts our economic model to balance the ecosystem.
It encourages genuine "competition" among operators rather than just forcing capital fragmentation.
The "optimal" stake (e.g. 1M SQD) can be a strong recommendation on the UI, not a rigid on-chain rule.

The question is: what are the potential drawbacks or risks of not having a hard cap?


We definitely want the pool operator to set a cap, simply because otherwise it will be hard to control the APY. Even in our first deployment we will likely start with, say, a 1M cap, then quickly fill it and extend it to 10M, but not more.
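A minimal sketch of the cap-gated deposit flow described above. All names here (`PortalPool`, `deposit`, `raise_cap`, `max_capacity`) are illustrative placeholders, not the actual contract API:

```python
# Hypothetical sketch: an operator-set hard cap on a portal pool,
# which can later be raised (e.g. 1M -> 10M SQD).

class PortalPool:
    def __init__(self, max_capacity: int):
        self.max_capacity = max_capacity  # e.g. 1_000_000 SQD at launch
        self.total_stake = 0
        self.stakes: dict[str, int] = {}

    def deposit(self, who: str, amount: int) -> None:
        # Reject deposits that would push the pool past its cap,
        # so the operator keeps the APY under control.
        if self.total_stake + amount > self.max_capacity:
            raise ValueError("pool cap exceeded")
        self.stakes[who] = self.stakes.get(who, 0) + amount
        self.total_stake += amount

    def raise_cap(self, new_cap: int) -> None:
        # Operator may extend the cap once the pool fills up.
        assert new_cap >= self.total_stake
        self.max_capacity = new_cap
```

The on-chain version would additionally gate `raise_cap` behind the operator's address, which is omitted here.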


10.Nov Call:

Contract: new GatewayRegistry
Type: Feature Proposal
TWAS (Time-Weighted Average Stake):
a metric commonly used in staking and reward-distribution systems to measure how much a user has staked and for how long.

In our case, TWAS can be used to introduce a boost factor for CUs that rewards long-running portals with higher stakes.

Example:
Maintain TWAS > 500K SQD for 30 days -> 1.05x CU boost
Maintain TWAS > 1M SQD for 60 days -> 1.10x CU boost

Implementation:
We would use the cumulative-value pattern here, pioneered by Uniswap V2 for its Time-Weighted Average Price (TWAP).
Instead of storing a history of stakes, we store a single, ever-increasing number: `stakeCumulative`.
This variable represents the integral of the stake amount over time (measured in blocks).
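A minimal Python sketch of this cumulative-value pattern. Names like `TwasTracker` and `stake_cumulative` are illustrative; an on-chain version would keep this state in the contract and use block numbers as the time axis:

```python
# Uniswap-V2-style accumulator applied to stake instead of price:
# stake_cumulative is the integral of the staked amount over time,
# and TWAS over a window is the difference of two snapshots divided
# by the elapsed time.

class TwasTracker:
    def __init__(self, start_time: int):
        self.stake = 0
        self.stake_cumulative = 0  # integral of stake over time
        self.last_update = start_time

    def _accrue(self, now: int) -> None:
        # Fold the time since the last update into the running integral.
        self.stake_cumulative += self.stake * (now - self.last_update)
        self.last_update = now

    def set_stake(self, now: int, new_stake: int) -> None:
        self._accrue(now)
        self.stake = new_stake

    def snapshot(self, now: int) -> int:
        self._accrue(now)
        return self.stake_cumulative

def twas(c0: int, t0: int, c1: int, t1: int) -> float:
    # Average stake between two snapshots (c0 at t0, c1 at t1).
    return (c1 - c0) / (t1 - t0)
```

With this, checking a boost condition like "TWAS > 500K SQD for 30 days" only needs two stored snapshots, not a full stake history.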


My proposal is to remove boosting altogether. It will probably be compensated by the mechanics of the portal pools anyway.


How will the distribution work then? Will everyone get $S_i B_i / \sum(S_j B_j)$ of the distributed USDC instead of $S_i/\sum(S_j)$?
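As a sketch, the weighted share $S_i B_i / \sum_j S_j B_j$ could be computed like this (function and parameter names are illustrative, not a proposed API):

```python
# Weighted-stake distribution: provider i receives
# S_i * B_i / sum_j(S_j * B_j) of the distributed USDC,
# instead of the plain S_i / sum_j(S_j) share.

def distribute(rewards_usdc: float,
               stakes: dict[str, float],
               boosts: dict[str, float]) -> dict[str, float]:
    # Providers without an explicit boost default to 1.0 (no boost).
    total_weighted = sum(stakes[a] * boosts.get(a, 1.0) for a in stakes)
    return {
        a: rewards_usdc * stakes[a] * boosts.get(a, 1.0) / total_weighted
        for a in stakes
    }
```

Note the full reward amount is always distributed; boosts only shift shares between providers.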


Yes, exactly.
We are aiming for a Weighted Stake model to incentivize long-term liquidity.
$B_i$ is the boost factor derived from the lock duration (TWAS).
This implies we track `totalWeightedStake` (effective stake) in the contract rather than just raw `totalStake`. That adds state-management complexity: weights have to be recalculated on modification/expiry whenever the user interacts (lazy state updates).


Discussed with DZ that

  • It's better to have pre-defined lock time for the boosts because we can then measure how much of the supply is locked in the contracts.
  • To avoid sharding the pool by each token's "duration", we can implement it on top of the simple solution: if you get an ERC20 token for locking SQD, you can then lock that token in another contract for a specified duration to get some reward for it. We can agree on the particular reward mechanism later to keep it out of the scope of the current implementation.



28.Nov Call:

  • Let's limit the maximum stake from a single wallet to discourage a single party from owning too big a share.

  • We probably still want to have "boosts" of returns if locked SQD for a long time.

  • It would be great to make your position in the pool transferable, ideally with ERC-1155 tokens. Then the delegators may start selling their positions to exit immediately without affecting the pool.


Important thing not to forget: the reward rate for a single delegator shouldn't depend on the total staked amount. If the pool is not full, we should only pay out the share proportional to the total capacity, not to the staked amount.
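A minimal sketch of that rule, assuming each staker is paid `reward * stake_i / capacity` and the undistributed remainder is simply withheld when the pool is under-filled (all names are illustrative):

```python
# Capacity-proportional payout: dividing by capacity (not total stake)
# keeps the per-token reward rate fixed regardless of how full the pool is.

def payouts(reward: float,
            stakes: dict[str, float],
            capacity: float) -> dict[str, float]:
    # In an under-filled pool, sum(payouts) < reward; the remainder
    # is not distributed rather than inflating everyone's rate.
    return {a: reward * s / capacity for a, s in stakes.items()}
```

Compare with dividing by `sum(stakes.values())`, where early delegators in a half-empty pool would earn double the target rate.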

### Portal Registration Contract (V2)

A new simplified Portal Registration Contract will be created to replace the current GatewayRegistry for portal pool operations. This contract is designed specifically for the portal pool system.