
Conversation

@dzhelezov

Tokenomics 2.1 addresses the centralization issues of Tokenomics 2.0, namely:

  • The single (de-facto centralized) pool for providing SQD for yield, which makes SQD a security
  • The lack of dynamic pricing, and the many parameters that have to be hard-coded or adjusted in a centralized (or, at best, DAO-like) fashion; the subscription fee is not established by an open marketplace

It does so by:

  • Removing the treasury-initiated buyback-and-burn mechanics, which makes SQD a security
  • Moving the reward token out of scope
  • Introducing a fee switch for the Portals (to be activated in the future if necessary)
  • Making it possible to register Portals on EVMs (in particular, Base) and Solana. For Solana users, this opens up the possibility to pay in USDC or SOL.

@kalabukdima

I like this version much more than 2.0!


### Reward Claims, Exits, and Closure

While the portal is active, SQD providers can claim their proportional share of accumulated rewards at any time by calling the claimRewards function on the PortalProxy. The function calculates their share based on their staked balance relative to the total tokens in the portal and transfers the corresponding tokens to them. The portal continues distributing as long as the data consumer injects tokens through the distribute function, with all distributions based on the splits configured in the FeeRouterModule.

Does the claimRewards function calculate their share? I think you won't be able to account for past claims this way, so you have to top up the claimable amount at distribution time. The same is already done in our WorkerRegistration contract.
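
For illustration, a minimal sketch of the accumulator-based accounting suggested here, assuming a MasterChef-style accRewardPerShare pattern (all names and the 1e18 scaling are illustrative, not the actual interface):

```solidity
pragma solidity ^0.8.0;

// Hypothetical sketch: rewards are "topped up" into a per-share accumulator
// at distribution time, so past claims are accounted for implicitly.
contract RewardAccountingSketch {
    uint256 public totalStaked;
    uint256 public accRewardPerShare; // scaled by 1e18

    struct Provider {
        uint256 staked;
        uint256 rewardDebt; // staked * accRewardPerShare at last interaction
    }
    mapping(address => Provider) public providers;

    function _distribute(uint256 amount) internal {
        require(totalStaked > 0, "nothing staked");
        accRewardPerShare += (amount * 1e18) / totalStaked;
    }

    function claimable(address who) public view returns (uint256) {
        Provider storage p = providers[who];
        return (p.staked * accRewardPerShare) / 1e18 - p.rewardDebt;
    }

    function claimRewards() external {
        Provider storage p = providers[msg.sender];
        uint256 pending = claimable(msg.sender);
        p.rewardDebt = (p.staked * accRewardPerShare) / 1e18;
        // transfer `pending` reward tokens to msg.sender here;
        // stake changes would settle pending rewards the same way
    }
}
```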

During this collection phase, the portal remains in a "Collecting" state where it accumulates SQD deposits from multiple providers until either the target amount is reached or the deposit deadline passes.
If the target is met before the deadline, the data consumer can trigger the activate function to transition the portal to its active distribution phase.

However, if the deadline expires without reaching the target, the portal is marked as failed, triggering a full refund of both the consumer's budget and all staked SQD tokens back to their respective owners.
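
A minimal sketch of this lifecycle, assuming timestamp-based deadlines (only activate appears in the text; the other names are hypothetical):

```solidity
pragma solidity ^0.8.0;

// Lifecycle sketch for the collection phase; refund bookkeeping is elided.
contract PortalLifecycleSketch {
    enum State { Collecting, Active, Failed }
    State public state = State.Collecting;

    uint256 public immutable targetAmount;
    uint256 public immutable depositDeadline;
    uint256 public totalDeposited;

    constructor(uint256 target, uint256 deadline) {
        targetAmount = target;
        depositDeadline = deadline;
    }

    function activate() external /* onlyDataConsumer */ {
        require(state == State.Collecting, "wrong state");
        require(totalDeposited >= targetAmount, "target not met");
        state = State.Active;
    }

    function markFailed() external {
        require(state == State.Collecting, "wrong state");
        require(block.timestamp > depositDeadline, "deadline not passed");
        state = State.Failed;
        // full refund of the consumer budget and staked SQD happens here
    }
}
```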

What if the deadline is very long? Can providers unlock their funds with the normal exit mechanism? If so, do we even need this cancellation?


The data consumer allocation (contribution by the deployer) will be determined by the target amount that the data consumer is seeking.

We are collecting 120% of the amount that will be set by SQD.

Is it configurable?




### Exit Delay Formula

Could you please also describe the interfaces of all the contracts? I think this document should be more detailed and serve as a blueprint for the exact implementation.

- **Problem**: Two separate delay mechanisms
- When a provider requests exit from Portal, Portal needs to unstake from GatewayRegistry
- But GatewayRegistry requires `lockEnd <= block.number` to unstake
- How can we synchronize these two timelines? Should we base it on the minimum lock period plus a percentage of the GatewayRegistry lock (i.e., a minimum, with the GatewayRegistry lock as a base, plus a percentage-based lock)?

I would say we should just set the staking duration equal to one epoch from this document, and during the minimal lockup period the pool (proxy) contract just won't allow you to withdraw from the underlying gateway contract.
The only drawback I see is that computationUnitsAmount could be higher if locked for the entire minimal lockup period, but let's say it's the price you pay for locking borrowed funds instead of owned SQD.
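
A sketch of this idea (hypothetical names except lockEnd; the GatewayRegistry call is assumed, not its real interface):

```solidity
pragma solidity ^0.8.0;

// The pool proxy enforces the minimal lockup itself, while the underlying
// GatewayRegistry stake is kept at one-epoch durations so that
// lockEnd <= block.number holds at every epoch boundary.
contract ExitGateSketch {
    mapping(address => uint256) public lockedUntil; // per-provider minimal lockup

    function requestExit(uint256 amount) external {
        require(block.number >= lockedUntil[msg.sender], "minimal lockup active");
        // safe to unstake `amount` from the gateway contract here, e.g.:
        // gatewayRegistry.unstake(amount); // assumed call, not the real API
    }
}
```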


When SQD providers stake their tokens into the portal, they lock them for a minimum duration period.
After this minimum lock period expires, providers can request to exit the portal by calling requestExit with their desired withdrawal amount.

So what happens to the registered portal when the token withdrawal is requested? Do I understand correctly that we keep those extra 20% liquid in the pool contract to be able to refund immediately if needed? But what happens when they run out? SQD can't be unstaked immediately from the gateway contract, and if you request unstaking in advance, the portal will stop being active.

I'm starting to think that we may need a new, much simpler portal registration contract.


Discussed with @dzhelezov that locking funds was intended to protect the token price.

I think an ideal solution would be something like this:

  • The raw portal registry contract allows immediate withdrawals, but only for a limited amount per epoch (let's say 100k). If you request to withdraw more, funds will be gradually unlocked (reducing CUs) and will wait on the contract to be collected; see the sketch below.
  • The portal pool builds around this limitation to distribute already unlocked funds among requesters, making slow exits even slower when multiple people want to withdraw simultaneously. No additional limits on the portal pool side are needed then: it allows you to withdraw as fast as the core contract allows.

This solution may be much harder to implement than "no withdrawal limits on the registry contract", so I think we can start with that one and make the registry contract upgradeable to implement this logic later.
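
A hypothetical sketch of the registry-side limit (the cap value and all names are illustrative):

```solidity
pragma solidity ^0.8.0;

// Immediate withdrawals up to a per-epoch cap; larger requests are queued
// and unlocked gradually in later epochs, reducing the portal's CUs.
contract LimitedWithdrawalSketch {
    uint256 public constant EPOCH_CAP = 100_000e18; // e.g. 100k SQD per epoch
    uint256 public withdrawnThisEpoch;

    function withdraw(uint256 amount) external {
        _rollEpoch(); // reset withdrawnThisEpoch when a new epoch starts
        uint256 immediate = amount;
        uint256 room = EPOCH_CAP - withdrawnThisEpoch;
        if (immediate > room) {
            immediate = room;
            _queue(msg.sender, amount - immediate); // collected later
        }
        withdrawnThisEpoch += immediate;
        // transfer `immediate` SQD to msg.sender here
    }

    function _rollEpoch() internal { /* epoch bookkeeping elided */ }
    function _queue(address who, uint256 amount) internal { /* elided */ }
}
```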

- **Closed**: Portal closed?


### To Discuss

I think we also talked about support for transferring the stake. Something to add in the next iteration probably, but maybe worth adding to the design doc.

Updated terminology, adjusted roles and refined descriptions for accuracy and consistency
@Gradonsky

https://docs.kiln.fi/v1/kiln-products/onchain/pooled-staking/key-concepts/exit-and-withdrawal
Withdrawal requests are queued, new deposits or validator exits fill tickets sequentially.
Users can redeem tickets when liquidity is available, allowing incremental exits without unstaking the entire pool.

https://docs.lido.fi/contracts/withdrawal-queue-erc721/
As in the first case, users join a withdrawal queue and receive an NFT ticket.
The queue is serviced as ETH becomes available, allowing partial exits without unstaking all validators.

https://docs.liquidcollective.io/eth/tokenomics/redemptions

https://blog.pstake.finance/2023/12/08/user-guide-how-to-liquid-stake-osmo-on-pstake/
Flash Unstake uses a liquidity buffer to match withdrawals with new deposits. Regular unstakes respect the underlying chain’s unbonding period.

The factory deploys a single PortalPool contract, an upgradeable instance that combines both the core distribution logic and SQD vault functionality into one unified contract.
Once deployed, SQD token providers can stake their tokens directly into the PortalPool by calling the stake function with the portal pool address and desired amount.

During this collection phase, the portal pool remains in a "Collecting" state where it accumulates SQD deposits from multiple providers until either the maximum capacity is reached or the deposit deadline passes.
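
A minimal sketch of this collection-phase interface (shown with stake directly on the pool, while the text's variant passes the pool address to a shared entry point; the capacity checks, minimal ERC-20 interface, and names are assumptions):

```solidity
pragma solidity ^0.8.0;

interface IERC20 {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

// Collection phase: accumulate SQD deposits until capacity or deadline.
contract PortalPoolSketch {
    enum State { Collecting, Active }
    State public state = State.Collecting;

    IERC20 public immutable sqd;
    uint256 public immutable maxCapacity;
    uint256 public immutable depositDeadline;
    uint256 public totalStaked;
    mapping(address => uint256) public staked;

    constructor(IERC20 token, uint256 capacity, uint256 deadline) {
        sqd = token;
        maxCapacity = capacity;
        depositDeadline = deadline;
    }

    function stake(uint256 amount) external {
        require(state == State.Collecting, "not collecting");
        require(block.timestamp <= depositDeadline, "deadline passed");
        require(totalStaked + amount <= maxCapacity, "over capacity");
        sqd.transferFrom(msg.sender, address(this), amount);
        staked[msg.sender] += amount;
        totalStaked += amount;
    }
}
```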


Do we want to have a soft cap, so that when the deposits surpass the minimum stake amount, the portal can be activated immediately while still seeking SQD providers?


That may actually be a good idea 🤔
We would then only have 3 states, right? Inactive (lower than the minimum required SQD), Partial (able to run but open for more SQD), and Full (no more capacity for SQD providers).

Let's put it in the doc and discuss with everyone else in a meeting.


10.Nov Call:
Instead of introducing three separate states, we'll rely on the flexibility of the new GatewayRegistry contract. Each stake is passed directly to the GatewayRegistry, and once the total stake is above MINIMUM (100K), we can deterministically calculate the CUs, since CUs are exposed via a view function.

- SQD provider positions could be represented as Liquid Stake Tokens (fungible) or NFTs (non-fungible)
- Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)



10.Nov Call:

Contract: PortalPool & System Architecture
Type: Design Decision

Staking Limits per Portal:

Question: Should we enforce a maxCapacity (hard cap) for each PortalPool, or should we allow unlimited staking and let APY dilution act as a natural cap?

IMO this approach is simpler and trusts our economic model to balance the ecosystem.
It encourages genuine "competition" among operators rather than just forcing capital fragmentation.
The "optimal" stake (e.g. 1M SQD) can be a strong recommendation in the UI, not a rigid on-chain rule.

The question is:
What are the potential drawbacks or risks of not having a hard cap?

@dzhelezov

We definitely want the pool operator to set a cap, simply because otherwise it will be hard to control the APY. Even in our first deployment, we'll likely start with, say, a 1M cap, then quickly fill it and extend to 10M, but not more.

- SQD provider positions could be represented as Liquid Stake Tokens (fungible) or NFTs (non-fungible)
- Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)



10.Nov Call:

Contract: new GatewayRegistry
Type: Feature Proposal
TWAS (Time-Weighted Average Stake):
a metric commonly used in staking and reward distribution systems to measure how much and for how long a user has staked their tokens.

In our case, TWAS can be used to introduce a boost factor for CUs that rewards long-running portals with higher stakes.

Example:
Maintain TWAS > 500K SQD for 30 days -> 1.05x CU boost
Maintain TWAS > 1M SQD for 60 days -> 1.10x CU boost

Implementation:
We would use a Cumulative Value pattern here, pioneered by Uniswap V2 for its Time-Weighted Average Price (TWAP).
Instead of storing a history of stakes, we store a single, ever-increasing number: stakeCumulative.
This variable represents the integral of the stake amount over time (measured in blocks).
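
A sketch of this pattern, mirroring Uniswap V2's price accumulator but for stake (names are illustrative):

```solidity
pragma solidity ^0.8.0;

// stakeCumulative is the integral of the stake amount over blocks.
contract TwasSketch {
    uint256 public stakeCumulative;
    uint256 public lastStake;       // stake at the last update
    uint256 public lastUpdateBlock;

    // Called on every stake change, before the new amount takes effect.
    function _update(uint256 newStake) internal {
        stakeCumulative += lastStake * (block.number - lastUpdateBlock);
        lastStake = newStake;
        lastUpdateBlock = block.number;
    }

    // TWAS over a window; the caller supplies a past accumulator snapshot.
    function twas(uint256 pastCumulative, uint256 pastBlock)
        external
        view
        returns (uint256)
    {
        require(block.number > pastBlock, "empty window");
        uint256 nowCumulative =
            stakeCumulative + lastStake * (block.number - lastUpdateBlock);
        return (nowCumulative - pastCumulative) / (block.number - pastBlock);
    }
}
```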


My proposal is to remove boosting altogether. It will probably be compensated by the mechanics of the portal pools anyway.


How will the distribution work then? Will everyone get not $S_i/\sum(S_j)$ of distributed USDC but something like $S_i B_i / \sum(S_j B_j)$?


Yes, exactly.
We are aiming for a Weighted Stake model to incentivize long-term liquidity.
$B_i$ is the boost factor derived from the lock duration (TWAS).
This implies we track totalWeightedStake (effective stake) in the contract rather than just the raw totalStake. This adds state-management complexity: weights are recalculated on modification/expiry whenever a user interacts (lazy state updates). See the sketch below.
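
A hypothetical sketch of that accounting, where each delegator's share is $S_i B_i / \sum_j S_j B_j$ (names and scaling are illustrative):

```solidity
pragma solidity ^0.8.0;

// Weighted-stake accounting with lazily refreshed boosts.
contract WeightedStakeSketch {
    uint256 public totalWeightedStake;                 // sum of S_j * B_j
    mapping(address => uint256) public weightedStake;  // S_i * B_i

    // Lazy update on interaction: recompute the caller's boost from their
    // lock duration (TWAS) and adjust the total. boost1e18 = B_i scaled by 1e18.
    function _refreshWeight(address who, uint256 stake, uint256 boost1e18)
        internal
    {
        totalWeightedStake -= weightedStake[who];
        weightedStake[who] = (stake * boost1e18) / 1e18;
        totalWeightedStake += weightedStake[who];
    }

    // Share of a USDC distribution: S_i * B_i / sum(S_j * B_j).
    function shareOf(address who, uint256 distributed)
        external
        view
        returns (uint256)
    {
        return (distributed * weightedStake[who]) / totalWeightedStake;
    }
}
```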


Discussed with DZ that:

  • It's better to have a pre-defined lock time for the boosts, because we can then measure how much of the supply is locked in the contracts.
  • To avoid sharding the pool by each token's "duration", we can implement it on top of the simple solution. If you get an ERC-20 token for locking SQD, you can then lock that token in another contract for the specified duration to get some reward for it. And we can agree on the particular reward mechanism later to keep it out of scope of the current implementation.

### Active Distribution and Fee Routing
Once activated, the portal pool enters its Active state, where it begins distributing rewards.

Throughout this active period, the portal operator can call the distribute function to inject tokens into the contract, which will be distributed across SQD providers etc.

DZ proposed to have a "gradual" distribution instead of direct payments to SQD providers, something similar to how the worker rewards work now. How I see it could work:

  • For portal operators the workflow stays almost the same: they top up the contract once in a long period, e.g. once a month. However, at pool creation they also specify the expected earnings in USDC/day and can modify it later. The top-up amount stays in the contract instead of being immediately distributed.

  • SQD providers can see their expected share of USDC/day of the given pool when locking SQD (also converted to APY at the current SQD price as a visual hint). In the UI they will be able to see their "current balance" of unclaimed USDC and can claim it to their wallet by issuing a transaction.

This actually achieves multiple goals:

  • Better UX for delegators, because they start earning every minute from the moment they join
  • Better UX for operators, because they clearly understand how much to pay and can compare their offer to other pools
  • Portal operators can pre-fund their pool so that delegators can be more confident in future earnings
  • The receiver pays the gas for ERC20 transfers

The main problem is what happens when the contract's token balance gets down to 0.

  • One option is to allow "negative balances" to be topped up in the future, while continuing to "promise" stable earnings to delegators. In this case someone may end up with a visible balance that they can't actually claim.
  • Another option is to stop topping up the balances at the moment the contract can't back every "promised" dollar with its current balance, and start recalculating the actual APY accordingly. This may be much harder to implement but sounds fairer for everyone.


IMO let's keep it honest & dynamic.
The "negative balance" approach is risky -> imagine a provider sees "$100 earned" but when they try to claim, they get $0. This could kill the trust.

Worst case: the operator doesn't top up and we end up showing huge negative numbers that nobody can actually withdraw :/

I'd go with dynamic rate adjustment, so when the pool runs low, the rate adjusts to reflect reality.
No fake promises. If the operator doesn't fund it, the rate just drops to 0 until they top up again.


How I see this could work in the contract (see the sketch below):

  • The portal operator sets the distribution rate $R$ USDC/block and can change it later at any moment.
  • The portal operator pre-funds the contract and can later top it up at any frequency, prolonging the runway. They can see the contract balance at the current block and should make sure it never reaches 0.
  • The fee $f \in [0, 1]$ is set in the contract to be collected to the treasury, leaving only $R \cdot (1 - f)$ to be distributed to the delegators.
  • If someone owns $w$ of the total stake ($w \in [0, 1]$), they get $w \cdot R \cdot (1 - f)$ USDC "added" to their balance each block. They can see the withdrawable balance at the current block and the rate at which it changes.

Then we just need to make sure that at any moment the balance is withdrawable, meaning that the sum of the balances of all delegators doesn't exceed the amount of USDC sitting in the contract.
If the portal operator always tops up the contract at least at the rate of $R$, this is already achieved. So what happens if they fail to top up in time?

  1. The earning rate displayed to the delegators immediately becomes 0, and the withdrawable balance stays at the current number. I think we agreed on that in the previous comment: no fake promises.
  2. We can still display something like the average earning rate: the total amount of top-ups in the history of the pool divided by the duration it has existed for. It will slowly start to get lower every block.
  3. When the operator decides to top up again, I can see two options for what should happen:
    1. All the balances are recalculated, and we continue running as before, starting from the current block. The average earnings then get lower than what was promised.
    2. We can force the operator to pay the debt and fail the transaction if they don't add enough funds. So the delegators won't be able to claim more rewards while the balance is zero, but later they will claim the full amount without even noticing that the wallet was empty at some point.
  4. Until it's topped up, the delegators only have the option to keep waiting or to unstake according to the usual rules.

Option 3.ii is a much better experience for the delegators, with the only risk that if the operator starts earning less and forgets about the balance running out, it may be hard for them to ever recover from that state.
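
A hypothetical sketch of the per-block accrual described above, where a delegator earns $w \cdot R \cdot (1 - f)$ per block (the decimals, ppm scaling, and names are illustrative; the runway cap discussed later is elided):

```solidity
pragma solidity ^0.8.0;

// Per-block accrual: each block, a delegator with stake share w accrues
// w * R * (1 - f) USDC on top of their last checkpoint.
contract AccrualSketch {
    uint256 public rate;       // R: USDC (6 decimals) distributed per block
    uint256 public feePpm;     // f scaled to parts-per-million
    uint256 public totalStake;

    struct Checkpoint { uint256 balance; uint256 blockNumber; }
    mapping(address => Checkpoint) public checkpoints;
    mapping(address => uint256) public stakeOf;

    // Withdrawable balance at the current block (ignores the runway limit;
    // assumes totalStake > 0).
    function balanceAt(address who) public view returns (uint256) {
        Checkpoint memory c = checkpoints[who];
        uint256 blocks = block.number - c.blockNumber;
        uint256 delegatorRate = (rate * (1_000_000 - feePpm)) / 1_000_000
            * stakeOf[who] / totalStake;
        return c.balance + blocks * delegatorRate;
    }
}
```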


Ideally, we should let the operator decide whether to pay the debt at the time of topping up or not, and then provide a good analytics page to view the history.

@kalabukdima Dec 16, 2025


After discussion with EF, we came to the conclusion that it should be enough to implement option 3.ii. Forcing the operator to pay the debt covers the most important case: when the operator just forgot to top up. It also enables a much simpler APR calculation.
If the operator doesn't want to pay the debt, they have the option to close the pool, unlocking the SQD.


Here's how I see the implementation

Smart contracts

At portal creation, the operator defines the distribution rate per second, which gets split into the worker/burning fees and the distributed delegator rewards per second. This rate is split among all the delegators depending on their share (rewards_rate * locked_tokens / pool_capacity). Later the operator can change the distribution rate.

Once the contract is topped up, it recalculates the current "operator balance" and saves a checkpoint: the new balance and the current timestamp. This information is enough to return the current balance in a read method at any point later (last_balance + time_delta * rate). An event corresponding to the checkpoint is also emitted to allow indexers to replicate this behaviour and plot the balance change over time.

Such a recalculation also has to be done on every change in the total staked amount, because the difference between the total stake and the capacity should stay on the pool's balance.

At every rate change and pool capacity change, the rewards rate changes for all the delegators. At these points the contract recalculates the new rate for each of them, stores checkpoints in the same way, and emits the corresponding events. Knowing the last checkpoint and the distribution rate allows you to calculate the claimable balance at any moment in the future. This calculation should also consider the runway of the current operator balance. The formula becomes something like last_balance + min(time_delta, runway) * total_reward_rate * stake / total_capacity.

Every time a delegator claims their rewards, a checkpoint is also created, but only for this delegator. A sketch of this checkpointing follows below.
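
A hypothetical sketch of the operator-side checkpointing (names and the USDC pull are illustrative; per-delegator checkpoints would follow the same shape):

```solidity
pragma solidity ^0.8.0;

// On every top-up (and, analogously, on rate/stake/capacity changes) the
// contract saves a checkpoint and emits an event for indexers to replay.
contract CheckpointSketch {
    event OperatorCheckpoint(uint256 balance, uint256 timestamp);

    uint256 public ratePerSecond;  // total distribution rate
    uint256 public lastBalance;    // operator balance at the last checkpoint
    uint256 public lastTimestamp;

    function topUp(uint256 amount) external /* onlyOperator */ {
        lastBalance = operatorBalance() + amount;
        lastTimestamp = block.timestamp;
        emit OperatorCheckpoint(lastBalance, lastTimestamp);
        // pull `amount` USDC from the operator here
    }

    // Read method: current balance derived from the last checkpoint,
    // i.e. last_balance - time_delta * rate, floored at zero.
    function operatorBalance() public view returns (uint256) {
        uint256 spent = (block.timestamp - lastTimestamp) * ratePerSecond;
        return spent >= lastBalance ? 0 : lastBalance - spent;
    }

    // Runway in seconds, used to cap delegators' claimable balances:
    // last_balance + min(time_delta, runway) * reward_rate * stake / capacity.
    function runway() public view returns (uint256) {
        if (ratePerSecond == 0) return type(uint256).max;
        return operatorBalance() / ratePerSecond;
    }
}
```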

UI

After activation, the pool is always in one of two states:

  • active: distributing at exactly the specified rate, or
  • out of money: the operator forgot to top up the pool, so it's not known whether they will top it up later or never top it up again.

In the first case, the UI can get the current (at the moment T) balance and the distribution rate, and start showing a constantly increasing number.
In the second case, it should just show the current balance (it's not increasing anymore) and warn the user that the pool is out of money until it's topped up again.

After the pool is topped up, the historical APR gets back to normal without any slumps because the missing funds were compensated.



**Two-Step Withdrawal Process:**

1. **Portal Pool Exit Delay**: Exits are subject to a time-delay mechanism designed to prevent sudden liquidity shocks. The exit delay consists of a base period of 1 epoch plus a proportional delay calculated from the amount being withdrawn. The system allows a maximum of 1% of the total portal pool liquidity to exit per epoch, meaning that if a provider wants to exit 5% of the liquidity, they must wait 1 epoch (base) plus 5 additional epochs (one epoch per 1% of liquidity), totaling 6 epochs before their full withdrawal is processed. Providers can withdraw unlocked portions incrementally (1% per epoch) rather than waiting for the full delay period to complete. A sketch of the delay formula follows below.
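
For illustration, the delay formula as a pure function (a sketch under the stated 1-epoch base and 1%-per-epoch cap; names are hypothetical):

```solidity
pragma solidity ^0.8.0;

library ExitDelaySketch {
    // 1 epoch base + one epoch per started 1% of total pool liquidity.
    // E.g. withdrawing 5% of liquidity -> 1 + 5 = 6 epochs.
    function exitDelayEpochs(uint256 amount, uint256 totalLiquidity)
        internal
        pure
        returns (uint256)
    {
        uint256 onePercent = totalLiquidity / 100;
        require(onePercent > 0, "pool too small");
        uint256 extra = (amount + onePercent - 1) / onePercent; // ceil
        return 1 + extra;
    }
}
```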

To protect against the "whales" who have many locked tokens, we should use absolute values for allowed unlock speed.

With the described mechanism, everyone has to wait the same duration to unlock 100% of their stake, no matter whether they own the entire pool or just 1 SQD.
Instead it should be something like "everyone can unlock a limited amount per block", making it easy to unlock 100 SQD but making you wait (and probably submit multiple transactions) to unlock half of the pool.

@Gradonsky Dec 3, 2025

Following our call, there is a problem regarding the withdrawal mechanisms.
Both current ideas have open issues:
Percentage-based unlock (e.g. 1% per epoch)

  • Pros: scales with pool size, naturally slows down large withdrawals.
  • Cons: can be bypassed by splitting stake across many wallets / LST positions (e.g. 500 × 0.5% = same as a big whale exit).

Absolute unlock rate

  • Pros: fair per-unit rate for everyone, independent of pool share, whales can’t drain everything in a single tx
  • Cons: same fragmentation problem (many wallets)

One idea discussed on the call was to combine a fixed base delay (e.g. N blocks) with an absolute per-block unlock cap, so even if exits are split across multiple wallets, everyone still waits at least the base period.

As mentioned, both are still vulnerable to splitting LPTs (Liquid Portal Tokens) across many wallets (the same bypass issue).

We need to decide whether to adopt:

  • absolute cap
  • percentage cap
  • or base delay + cap hybrid
  • ideas?


My initial proposal

There is a global limit enforced by the underlying registry contract — $T$ SQD can be unlocked per block, let's say 100. Now suppose someone owns $w$ of the total stake ($w \in [0, 1]$). Then I suggest this delegator can unstake $w \cdot T$ SQD per block. This way it doesn't matter whether you split your delegation into multiple piles or not — every SQD locked in the contract gets unlocked at the same rate, making it 100 times faster to unlock 1000 SQD than it takes to unlock 100,000 SQD.

Also, there is a limit on how much can be unlocked in a single transaction — to guarantee that the active SQD balance doesn't immediately get too low, making the portal unusable. And you're right, @Gradonsky, this limit can be avoided by sending your stake to multiple contracts.

I can see a couple of options if we want to solve this problem.

Option 1. Calculate the rate depending on the current withdrawal queue

Instead of using the share of the total stake $w$, only calculate how much every currently withdrawing delegator requests and split the rate $T$ proportionally between them. So if only one delegator is withdrawing, they get the maximum rate, $T$. If one is withdrawing 100k SQD and another requested a withdrawal of 200k SQD, then starting from that transaction the first gets $\frac{1}{3} T$ SQD unlocked per block, and the second gets $\frac{2}{3} T$ SQD unlocked per block.

This way it doesn't make sense to split the stake between multiple wallets — you'll still get the same total withdrawal rate. But you can still destabilize the portal by doing that.

Additionally, we can limit the number of concurrent active withdrawals, making the next request wait until someone else's withdrawal request gets fulfilled.

Option 2. Only allow one withdrawal at a time.

A much simpler approach in terms of implementation, but maybe not fair enough. If we only allow one active (limited) withdrawal, making everyone else wait until it's completed, then it can be fulfilled at the maximum rate of $T$. After that, either the same person or anyone else submits their withdrawal request and waits until it's completed. Additionally, we can disable partial withdrawals, only allowing the claim of the whole requested amount after the unlock period.

Together with the ability to transfer/sell your position on the market and the ability to request immediate withdrawal by paying the fee, I think this is not a bad option.
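
A hypothetical sketch of Option 1's rate splitting (the lazy accrual shown here is a simplification: a real implementation would checkpoint every requester whenever totalWithdrawing changes):

```solidity
pragma solidity ^0.8.0;

// Option 1 sketch: the global rate T is split proportionally among the
// currently active withdrawal requests.
contract ProportionalExitSketch {
    uint256 public constant T = 100e18; // global unlock per block (e.g. 100 SQD)
    uint256 public totalWithdrawing;    // sum of active requested amounts

    struct Request { uint256 amount; uint256 unlocked; uint256 lastBlock; }
    mapping(address => Request) public requests;

    function requestWithdrawal(uint256 amount) external {
        _accrue(msg.sender);
        requests[msg.sender].amount += amount;
        totalWithdrawing += amount;
    }

    // Each block a requester unlocks T * amount / totalWithdrawing.
    function _accrue(address who) internal {
        Request storage r = requests[who];
        if (r.amount > 0 && totalWithdrawing > 0) {
            uint256 blocks = block.number - r.lastBlock;
            r.unlocked += (blocks * T * r.amount) / totalWithdrawing;
        }
        r.lastBlock = block.number;
    }
}
```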


In Option 1.
What happens if there is a whale who wants to withdraw 100K SQD, and an "attacker" who splits 10K SQD into 1,000 wallets with 10 SQD each and requests withdrawals on all of them?
Does this dilute the whale so they receive only a small fraction of the withdrawal throughput, while the attacker’s many small wallets take most of the bandwidth?


In option 1, which is still based on the relative stake being withdrawn, the whale will get 10/11ths of the rate, and the attacker will get 1/11th in total (1/11000 per wallet).

If we limit the number of concurrent active withdrawals, then yes, the whale will have to wait until the attacker's 10k SQD become fully unlocked. However, it will take the same amount of time to withdraw 10k SQD at once as it would take to wait for 1000 withdrawals of 10 SQD each. So for the attacker, it doesn't make much sense to split it past the concurrency limit. And they need to own quite a big share to actually mess with other delegators, which is a reasonable limitation, I think.


I thought the limit on the queue size could be something like 10. How much gas would it cost then, approximately? Also, only those who enter later will have increased gas usage, right?


Hmm, what do you think about a Cumulative Conveyor Belt (Global Leaky Bucket) model?

This system tracks a single global totalRequested counter and processes exits at a fixed globalUnlockRate (e.g., 100 SQD/block), effectively placing every request on a continuous timeline. This renders Sybil attacks useless, because the time cost is determined solely by the total volume ahead of you, not by the number of participants: a whale requesting 100k SQD occupies the same "length" on the belt as an attacker splitting 100k SQD into 1,000 wallets, meaning the last split transaction finishes at exactly the same block as the single large transaction would have.

Queue:

  [====User A (1000 SQD)====][====User B (5000 SQD)====][==User C (500 SQD)==]
          ←── Belt moves at a fixed rate (e.g. 1000 SQD/block) ──→

  • Each exit request gets a "ticket" with a position on the belt
  • The belt moves at a constant speed regardless of who's in line
  • You can claim when your position has been "processed"
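
A hypothetical sketch of this leaky bucket (names are illustrative; a production version would also cap processed() at totalRequested so idle periods don't pre-unlock future requests, and would limit the total belt length to protect CUs):

```solidity
pragma solidity ^0.8.0;

// One global counter; the belt advances at a fixed rate, and a ticket is
// claimable once the belt has passed its far edge.
contract ConveyorBeltSketch {
    uint256 public constant UNLOCK_RATE = 100e18; // e.g. 100 SQD per block
    uint256 public startBlock;
    uint256 public totalRequested; // total SQD ever queued for exit

    struct Ticket { uint256 end; } // position of the ticket's far edge
    mapping(address => Ticket[]) public tickets;

    constructor() {
        startBlock = block.number;
    }

    function requestExit(uint256 amount) external {
        totalRequested += amount;
        tickets[msg.sender].push(Ticket({end: totalRequested}));
    }

    // How much of the belt has been processed so far.
    function processed() public view returns (uint256) {
        return (block.number - startBlock) * UNLOCK_RATE;
    }

    function claim(uint256 ticketId) external {
        Ticket memory t = tickets[msg.sender][ticketId];
        require(processed() >= t.end, "still on the belt");
        // transfer the ticket's SQD to msg.sender and delete the ticket
    }
}
```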


Oh right, this is similar to option 2, but instead of rejecting your request, you get to the end of the queue, knowing in advance how long you'll have to wait. I like it!
The only catch is that we'll have to limit the total length of the belt so that the portal doesn't get a quick drop in CUs.


@dzhelezov do you think it will be too restrictive if we queue up everyone who wants to withdraw their stake? Usually everyone will request at different times without even noticing this mechanism, but if there is a massive run, the first-to-request, first-out rule starts to work. It also simplifies the implementation significantly.

@dzhelezov

Yes, I think this "conveyor belt" is the right approach; as far as I know, something similar is implemented for the Ethereum validator withdrawal queue.

- Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)




28.Nov Call:

  • Let's limit the maximum stake from a single wallet to discourage a single party from owning too big a share.

  • We probably still want to have "boosts" of returns for SQD locked for a long time.

  • It would be great to make your position in the pool transferable, ideally with ERC-1155 tokens. Then delegators could sell their positions to exit immediately without affecting the pool.



### To Discuss

1. **Liquid Stake Tokens (LST) vs NFTs**:


A decision needs to be made here.
ERC-20 (separate token per portal)
Upsides:

  • Much simpler to understand for users
  • Basically every DeFi protocol supports ERC20 out of the box
  • Easy wallet support (standard token flow)
  • Can trade instantly on Uniswap/DEXs, super straightforward (after pool creation)
  • Each portal = its own token + symbol (example: LPT-PORTAL1)

Downsides:

  • If we have 100+ portals → that’s 100+ token contracts deployed
  • I’m not bringing up gas because on Arbitrum it barely matters
  • Harder to track all user positions in a single clean view
  • No unified contract that holds everything together
  • User must manually add each token to the wallet

ERC-1155 (One contract, many token IDs)
Upsides:

  • One contract to rule all portal positions
  • Single deployment, much cleaner on the infra side
  • Batch transfers possible (exit multiple portals in one tx)
  • Super easy portfolio querying (one contract = full overview)
  • Native metadata URI support

Downsides:

  • We lose most DeFi compatibility
  • Harder to trade on secondary markets
  • Wallets show it differently (NFT-style vs token-style)
  • More complex approval flow (setApprovalForAll)

IMO we should stick with ERC-20, as it simply unlocks way more value through DeFi integrations.


I'm not sure creating a pool per token on DEXes can deliver a good UX; the liquidity there will mostly stay at zero, I think.

I hope DeFi infrastructure will eventually mature to support trading ERC-1155 tokens. Until then, it's not really clear how we can provide a good experience.


Discussed with DZ and EF that it should be separate ERC-20 tokens to make use of the better infrastructure. We don't really care about liquidity and being able to sell the token; there are already enough tools in the ecosystem to work with tokens that are not liquid.

- Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)




Important thing to not forget about: the rewards rate for a single delegator shouldn't depend on the total staked amount. If the pool is not full, we should only pay out the share proportional to the total capacity, not to the staked amount.
