Tokenomics 2.1 #5

Conversation
I like this version much more than 2.0!
network-rfc/12_contracts_design.md (outdated)
> ### Reward Claims, Exits, and Closure
>
> While the portal is active, SQD providers can claim their proportional share of accumulated rewards at any time by calling the claimRewards function on the PortalProxy, which calculates their share based on their staked balance relative to the total tokens in the portal and transfers the corresponding tokens to them. The portal continues distributing as long as the data consumer injects tokens through the distribute function, with all distributions based on the FeeRouterModule-configured splits.
Does the claimRewards function calculate their share? I don't think you can account for past claims this way, so you have to top up the claimable amount at distribution time. The same is already done in our WorkerRegistration contract.
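For illustration, a minimal sketch of the accumulator pattern this comment alludes to (a per-share reward index updated at distribution time, so no iteration over stakers is needed); all names here are hypothetical, not taken from the actual contracts:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical sketch: rewards are credited at distribution time through a
/// global per-share index, so past claims are accounted for without ever
/// iterating over stakers inside claimRewards.
contract RewardAccountingSketch {
    uint256 private constant PRECISION = 1e18;

    uint256 public totalStaked;
    uint256 public accRewardPerShare; // scaled by PRECISION

    mapping(address => uint256) public staked;
    mapping(address => uint256) public rewardDebt; // index snapshot at last settlement
    mapping(address => uint256) public pending;    // settled, claimable rewards

    // Called by distribute(): bumps the global index instead of paying out.
    function _distribute(uint256 amount) internal {
        require(totalStaked > 0, "nothing staked");
        accRewardPerShare += amount * PRECISION / totalStaked;
    }

    // Settles accrued rewards; must run before any stake change or claim.
    function _settle(address account) internal {
        uint256 accrued = staked[account] * accRewardPerShare / PRECISION;
        pending[account] += accrued - rewardDebt[account];
        rewardDebt[account] = accrued;
    }
}
```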
network-rfc/12_contracts_design.md (outdated)
> During this collection phase, the portal remains in a "Collecting" state where it accumulates SQD deposits from multiple providers until either the target amount is reached or the deposit deadline passes.
> If the target is met before the deadline, the data consumer can trigger the activate function to transition the portal to its active distribution phase.
>
> However, if the deadline expires without reaching the target, the portal is marked as failed, triggering a full refund of both the consumer's budget and all staked SQD tokens back to their respective owners.
What if the deadline is very long? Can providers unlock their funds with the normal exit mechanism? If so, do we even need this cancellation?
network-rfc/12_contracts_design.md (outdated)
> The data consumer allocation (contribution by the deployer) will be determined by the target amount that the data consumer is seeking.
>
> We are collecting 120% of the amount that will be set by SQD.
Is it configurable?
network-rfc/12_contracts_design.md (outdated)
> ### Exit Delay Formula
Could you please also describe the interfaces of all the contracts? I think this document should be more detailed and serve as a blueprint for the exact implementation.
network-rfc/12_contracts_design.md (outdated)
> - **Problem**: Two separate delay mechanisms
>   - When a provider requests exit from Portal, Portal needs to unstake from GatewayRegistry
>   - But GatewayRegistry requires `lockEnd <= block.number` to unstake
>   - How can we synchronize these two timelines? Should we base it on the minimum lock period plus a percentage of the GatewayRegistry lock?
I would say we should just set the staking duration equal to one epoch, as defined in this document, and during the minimal lockup period the pool (proxy) contract just won't allow you to withdraw from the underlying gateway contract.
The only drawback I see is that computationUnitsAmount could be higher if locked for the entire minimal lockup period, but let's say it's the price you pay for locking borrowed funds instead of owned SQD.
network-rfc/12_contracts_design.md (outdated)
> While the portal is active, SQD providers can claim their proportional share of accumulated rewards at any time by calling the claimRewards function on the PortalProxy, which calculates their share based on their staked balance relative to the total tokens in the portal and transfers the corresponding tokens to them. The portal continues distributing as long as the data consumer injects tokens through the distribute function, with all distributions based on the FeeRouterModule-configured splits.
>
> When SQD providers stake their tokens into the portal, they lock them for a minimum duration period.
> After this minimum lock period expires, providers can request to exit the portal by calling requestExit with their desired withdrawal amount.
So what happens to the registered portal when a token withdrawal is requested? Do I understand correctly that we keep those extra 20% liquid in the pool contract to be able to refund immediately if needed? But what happens when they run out? SQD can't be unstaked immediately from the gateway contract, and if you request unstaking in advance, the portal will stop being active.
I'm starting to think that we may need a new, much simpler, portal registration contract.
Discussed with @dzhelezov that locking funds was intended to protect token price.
I think an ideal solution would be something like this:
- The raw portal registry contract allows immediate withdrawals but only for a limited amount per epoch (let's say 100k). If you request to withdraw more, funds will be gradually unlocked (reducing CU) and waiting on the contract to be collected.
- The portal pool builds around this limitation to distribute already unlocked funds among requesters, making slow exits even slower when multiple people want to withdraw simultaneously. No additional limits on the portal pool side are needed then — it allows you to withdraw as fast as the core contract allows it.
This solution may be much harder to implement than "no withdrawal limits on the registry contract", so I think we can start with that one and make the registry contract upgradeable to implement this logic later.
> - **Closed**: Portal closed?
>
> ### To Discuss
I think we also talked about support for transferring the stake. Something to add in the next iteration probably, but maybe worth adding to the design doc.
Updated terminology, adjusted roles and refined descriptions for accuracy and consistency
https://docs.kiln.fi/v1/kiln-products/onchain/pooled-staking/key-concepts/exit-and-withdrawal
https://docs.lido.fi/contracts/withdrawal-queue-erc721/
https://docs.liquidcollective.io/eth/tokenomics/redemptions
https://blog.pstake.finance/2023/12/08/user-guide-how-to-liquid-stake-osmo-on-pstake/
> The factory deploys a single PortalPool contract, an upgradeable instance that combines both the core distribution logic and SQD vault functionality into one unified contract.
> Once deployed, SQD token providers can stake their tokens directly into the PortalPool by calling the stake function with the portal pool address and desired amount.
>
> During this collection phase, the portal pool remains in a "Collecting" state where it accumulates SQD deposits from multiple providers until either the maximum capacity is reached or the deposit deadline passes.
Do we want to have a soft cap, so that when the deposits surpass the minimum stake amount, the portal can be activated immediately while still seeking SQD providers?
That may actually be a good idea 🤔
We would then only have 3 states, right? Inactive (lower than the minimum required SQD), Partial (able to run but open for more SQD), and Full (no more capacity for SQD providers).
Let's put it in the doc and discuss with everyone else in a meeting.
10.Nov Call:
Instead of introducing three separate states, we'll rely on the flexibility of the new GatewayRegistry contract. Each stake is passed directly to the GatewayRegistry, and once the total stake is above MINIMUM (100K), we can deterministically calculate the CUs, since CUs are exposed via a view function.
> - SQD provider positions could be represented as Liquid Stake Tokens (fungible) or NFTs (non-fungible)
> - Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)
10.Nov Call:
Contract: PortalPool & System Architecture
Type: Design Decision
Staking Limits per Portal:
Question: Should we enforce a maximum maxCapacity (hard cap) for each PortalPool, or should we allow unlimited staking and let APY dilution act as a natural cap?
IMO, this approach is simpler and trusts our economic model to balance the ecosystem.
It encourages genuine "competition" among operators rather than just forcing capital fragmentation.
The "optimal" stake (e.g. 1M SQD) can be a strong recommendation on the UI, not a rigid on-chain rule.
The question is: what are the potential drawbacks or risks of not having a hard cap?
We definitely want the pool operator to set a cap — simply because otherwise it will be hard to control the APY. Even in our first deployment we'll likely start with, say, a 1M cap, then quickly fill it and extend to 10M, but not more.
> - SQD provider positions could be represented as Liquid Stake Tokens (fungible) or NFTs (non-fungible)
> - Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)
10.Nov Call:
Contract: new GatewayRegistry
Type: Feature Proposal
TWAS (Time-Weighted Average Stake):
A metric normally used in staking and reward distribution systems to measure how much, and for how long, a user has staked their tokens.
In our case, TWAS can be used to introduce a boost factor for CUs that rewards long-running portals with higher stakes.
Example:
Maintain TWAS > 500K SQD for 30 days -> 1.05x CU boost
Maintain TWAS > 1M SQD for 60 days -> 1.10x CU boost
Implementation:
We would use here a Cumulative Value pattern, pioneered by Uniswap V2 for its Time-Weighted Average Price (TWAP).
Instead of storing a history of stakes, we store a single, ever-increasing number: stakeCumulative.
This variable represents the integral of the stake amount over time (measured in blocks).
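For illustration, a minimal sketch of this cumulative-value pattern, assuming every stake change first updates the accumulator (all names are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical TWAS tracking via a Uniswap-V2-style cumulative value:
/// stakeCumulative integrates the staked amount over blocks, so the
/// time-weighted average stake between two observations is just a
/// difference quotient -- no per-block history is stored.
contract TwasSketch {
    uint256 public stake;            // current staked amount
    uint256 public stakeCumulative;  // integral of stake over blocks
    uint256 public lastUpdateBlock;

    // Must run before every stake change so the integral stays exact.
    function _accumulate() internal {
        stakeCumulative += stake * (block.number - lastUpdateBlock);
        lastUpdateBlock = block.number;
    }

    // TWAS over [fromBlock, now), given a snapshot of the cumulative
    // value taken at fromBlock (stored off-chain or in a checkpoint).
    function twas(uint256 cumulativeAtFrom, uint256 fromBlock)
        external view returns (uint256)
    {
        require(block.number > fromBlock, "no elapsed blocks");
        uint256 cumNow = stakeCumulative + stake * (block.number - lastUpdateBlock);
        return (cumNow - cumulativeAtFrom) / (block.number - fromBlock);
    }
}
```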
My proposal is to remove boosting altogether. It will probably be compensated by the mechanics of the portal pools anyway.
How will the distribution work then? Will everyone get a share proportional to their boosted stake?
Yes, exactly.
We are aiming for a Weighted Stake model to incentivize long-term liquidity.
$B_i$ is the boost factor derived from the lock duration (TWAS).
This implies we track totalWeightedStake (effective stake) in the contract rather than just the raw totalStake. This adds state-management complexity (recalculating weights on modification/expiry) whenever a user interacts (lazy state updates).
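In formula form, the reward share of provider $i$ under this weighted-stake model would presumably be (with $s_i$ the raw stake and $B_i$ the boost factor):

$$w_i = \frac{s_i \cdot B_i}{\sum_j s_j \cdot B_j} = \frac{s_i \cdot B_i}{\text{totalWeightedStake}}$$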
Discussed with DZ that
- It's better to have pre-defined lock time for the boosts because we can then measure how much of the supply is locked in the contracts.
- To avoid sharding the pool by each token's "duration", we can implement it on top of the simple solution. If you get an ERC20 token for locking SQD, you can then lock that token in another contract for the specified duration to get some reward for it. And we can agree on the particular reward mechanism later to keep it out of scope of the current implementation
> ### Active Distribution and Fee Routing
> Once activated, the portal pool enters its Active state where it begins distributing
>
> Throughout this active period, the portal operator can call the distribute function to inject tokens into the contract, which will be distributed across SQD providers etc.
DZ proposed to have a "gradual" distribution instead of direct payments to SQD providers, something similar to how the worker rewards work now. How I see it could work:
- For portal operators the workflow stays almost the same: they top up the contract once in a long period, e.g. a month. However, at pool creation they also specify the expected earnings in USDC/day and can modify it later. The top-up amount stays in the contract instead of being immediately distributed.
- SQD providers can see their expected share of USDC/day of the given pool when locking SQD (also converted to APY at the current SQD price as a visual hint). In the UI they will be able to see their "current balance" of unclaimed USDC and can claim it to their wallet by issuing a transaction.
This actually achieves multiple goals:
- Better UX for delegators because they start earning every minute from the moment they join
- Better UX for operators because they clearly understand how much to pay and can compare their offer to other pools
- Portal operators can pre-fund their pool so that delegators can be more confident in future earnings
- Receiver pays for gas for ERC20 transfers
The main problem is what happens when the contract's token balance gets down to 0.
- One option is to allow "negative balances" to be topped up in the future, while continuing to "promise" stable earnings to delegators. In this case someone may end up with visible balance that they can't actually claim.
- Another option is to stop topping up the balances at the moment when the contract can't back every "promised" dollar with the current balance, and start recalculating the actual APY accordingly. This may be much harder to implement but sounds more fair for everyone.
IMO let's keep it honest & dynamic.
The "negative balance" approach is risky -> imagine a provider sees "$100 earned" but when they try to claim, they get $0. This could kill trust.
Worst case: the operator doesn't top up and we end up showing huge negative numbers that nobody can actually withdraw :/
I'd go with dynamic rate adjustment, so when the pool runs low, the rate adjusts to reflect reality.
No fake promises. If the operator doesn't fund it, the rate just drops to 0 until they top up again.
How I see this could work in the contract:
- The portal operator sets the distribution rate $R$ USDC/block and can change it later at any moment.
- The portal operator pre-funds the contract and can later top it up at any frequency, prolonging the runway. They can see the contract balance at the current block and should make sure that it never reaches 0.
- The fee $f \in [0, 1]$ is set in the contract to be collected to the treasury, leaving only $R \cdot (1 - f)$ to be distributed to the delegators.
- If someone owns a share $w$ of the total stake ($w \in [0, 1]$), they get $w \cdot R \cdot (1 - f)$ USDC "added" to their balance each block. They can see the withdrawable balance at the current block and the rate at which it changes.

Then we just need to make sure that at any moment the balance is withdrawable, meaning that the sum of balances of all delegators doesn't exceed the amount of USDC sitting in the contract. If the portal operator always tops up the contract at least at the rate of $R$, this is always satisfied. If the balance ever runs out:
1. The earning rate displayed to the delegators immediately becomes 0, and the withdrawable balance stays at the current number. I think we agreed on that in the previous comment: no fake promises.
2. We can still display something like the average earning rate: the total amount of top-ups in the history of the pool divided by the duration it has existed for. It will slowly get lower every block.
3. When the operator decides to top up again, I can see two options for what should happen:
   i. All the balances are recalculated, and we continue running as before, starting from the current block. The average earnings then get lower than what was promised.
   ii. We can force the operator to pay the debt and fail the transaction if they don't add enough funds. So the delegators won't be able to claim more rewards while the balance is zero, but later they will claim the full amount without even noticing that the wallet was empty at some point.
4. Until it's topped up, the delegators only have the option to keep waiting or to unstake according to the usual rules.

3.ii is a much better experience for the delegators, with the only risk that if the operator starts earning less and forgets about the balance running out, it may be hard for them to ever recover from that state.
Ideally, we should let the operator decide whether to pay the debt at the time of topping up or not. And then provide a good analytics page to view the history
After discussion with EF we came to the conclusion that it should be enough to implement option 3ii. Forcing the operator to pay the debt covers the most important case — when the operator just forgot to top up. It also enables a much simpler APR calculation.
If the operator doesn't want to pay the debt, they have an option to close the pool, unlocking SQD.
Here's how I see the implementation:
Smart contracts
At portal creation, the operator defines the distribution rate per second, which gets split into worker/burning fees and delegator rewards per second. This rate is split among all the delegators depending on their share (rewards_rate * locked_tokens / pool_capacity). Later the operator can change the distribution rate.
Once the contract is topped up, it recalculates the current "operator balance" and saves a checkpoint — the new balance and current timestamp. This information is enough to return the current balance in a read method at any point later (last_balance + time_delta * rate). An event corresponding to the checkpoint is also emitted to allow indexers to replicate this behaviour and plot the balance change over time.
Such recalculation also has to be done on every change in the total staked amount, because the difference between the total stake and the capacity should stay on the pool's balance.
At every rate change and pool capacity change, the rewards rate changes for all the delegators. At these points the contract recalculates the new rate for each of them, stores checkpoints in the same way, and emits the corresponding events. Knowing the last checkpoint and the distribution rate allows you to calculate the claimable balance at any moment in the future. This calculation should also consider the runway of the current operator balance. The formula becomes something like last_balance + min(time_delta, runway) * total_reward_rate * stake / total_capacity.
Every time the delegator claims the rewards, the checkpoint is also created, but only for this delegator.
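For illustration, a minimal sketch of the checkpoint mechanics described above, assuming delegator checkpoints are refreshed at every rate change, capacity change, and top-up, so they never predate the operator checkpoint (all names are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical sketch of checkpointed streaming rewards: the claimable
/// balance is derived from the last checkpoint, the reward rate, and the
/// runway of the operator balance -- no per-block storage writes needed.
contract StreamingRewardsSketch {
    uint256 public rewardRatePerSec; // total delegator rewards per second, after fees
    uint256 public totalCapacity;    // pool capacity the rate is split over

    uint256 public operatorBalance;  // USDC left, as of the last checkpoint

    struct Checkpoint { uint256 balance; uint256 timestamp; }
    mapping(address => Checkpoint) public delegatorCp; // per-delegator claimable snapshot
    mapping(address => uint256) public stake;

    event Checkpointed(uint256 operatorBalance, uint256 timestamp);

    // claimable = last_balance + min(time_delta, runway) * rate * stake / total_capacity
    function claimable(address d) public view returns (uint256) {
        Checkpoint memory cp = delegatorCp[d];
        uint256 dt = block.timestamp - cp.timestamp;
        uint256 runway = rewardRatePerSec == 0
            ? type(uint256).max
            : operatorBalance / rewardRatePerSec;
        if (dt > runway) dt = runway; // stream stops when the operator balance runs out
        return cp.balance + dt * rewardRatePerSec * stake[d] / totalCapacity;
    }
}
```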
UI
After activation, the pool is always in one of two states:
- active — distributing exactly the specified rate, or
- out of money — the operator forgot to top up the pool, so it's not known whether they will top it up later or never top it up again.
In the first case, the UI can get the current (at the moment T) balance and the distribution rate, and start showing a constantly increasing number.
In the second case, it should just show the current balance (it's not increasing anymore), and warn the user that the pool is out of money until it's topped up again.
After the pool is topped up, the historical APR gets back to normal without any slumps because the missing funds were compensated.
> **Two-Step Withdrawal Process:**
>
> 1. **Portal Pool Exit Delay**: Exits are subject to a time-delay mechanism designed to prevent sudden liquidity shocks. The exit delay consists of a base period of 1 epoch plus a percentual delay calculated by the amount being withdrawn. The system allows a maximum of 1% of the total portal pool liquidity to exit per epoch, meaning if a provider wants to exit 5% of the liquidity, they must wait 1 epoch (base) plus 5 additional epochs (one epoch per 1% of liquidity), totaling 6 epochs before their full withdrawal is processed. Providers can withdraw unlocked portions incrementally (1% per epoch) rather than waiting for the full delay period to complete.
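Written as a formula (with $a$ the amount withdrawn and $T$ the total pool liquidity), the quoted rule is:

$$\text{delay} = 1 + \left\lceil \frac{a}{0.01 \cdot T} \right\rceil \text{ epochs}$$

so the 5% example gives $1 + 5 = 6$ epochs.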
To protect against the "whales" who have many locked tokens, we should use absolute values for allowed unlock speed.
With the described mechanism, everyone will have to wait the same duration to unlock 100% of their stake — no matter if you own the entire pool or just 1 SQD.
Instead it should be something like "everyone can unlock a limited amount per block" making it easy to unlock 100 SQD but making you wait (and probably submit multiple transactions) to unlock half of the pool.
Following our call, there is a problem regarding the withdrawal mechanisms.
Both current ideas have open issues:
Percentage-based unlock (e.g. 1% per epoch)
- Pros: scales with pool size, naturally slows down large withdrawals.
- Cons: can be bypassed by splitting stake across many wallets / LST positions (e.g. 500 × 0.5% = same as a big whale exit).
Absolute unlock rate
- Pros: fair per-unit rate for everyone, independent of pool share, whales can’t drain everything in a single tx
- Cons: same fragmentation problem (many wallets)
One idea discussed on the call was to combine a fixed base delay (e.g. N blocks) with an absolute per-block unlock cap, so even if exits are split across multiple wallets, everyone still waits at least the base period.
As mentioned, both are still vulnerable to splitting LPTs (Liquid Portal Tokens) across many wallets (the same bypass issue).
We need to decide whether to adopt:
- absolute cap
- percentage cap
- or base delay + cap hybrid
- ideas?
My initial proposal:
There is a global limit, enforced by the underlying registry contract, on how much can be unlocked per epoch.
Also, there is a limit on how much can be unlocked in a single transaction — to guarantee that the active SQD balance doesn't immediately get too low, making the portal unusable. And you're right, @Gradonsky, this limit can be avoided by sending your stake to multiple contracts.
I can see a couple of options if we want to solve this problem.
Option 1. Calculate the rate depending on the current withdrawal queue.
Instead of using the share of the total stake, we can base each withdrawer's unlock rate on their share of the total amount currently queued for withdrawal.
This way it doesn't make sense to split the stake between multiple wallets — you'll still get the same total withdrawal rate. But you can still destabilize the portal by doing that.
Additionally, we can limit the number of concurrent active withdrawals, making the next request wait until someone else's withdrawal request gets fulfilled.
Option 2. Only allow one withdrawal at a time.
A much simpler approach in terms of implementation, but maybe not fair enough. If we only allow one active (limited) withdrawal, making everyone else wait until it's completed, then it can be fulfilled at the maximum rate allowed by the registry contract.
Together with the ability to transfer/sell your position on the market and the ability to request immediate withdrawal by paying the fee, I think this is not a bad option.
In Option 1.
What happens if there is a whale who wants to withdraw 100K SQD, and an "attacker" who splits 10K SQD into 1,000 wallets with 10 SQD each and requests withdrawals on all of them?
Does this dilute the whale so they receive only a small fraction of the withdrawal throughput, while the attacker’s many small wallets take most of the bandwidth?
In option 1, which is still based on the relative stake being withdrawn, the whale will get 10/11ths of the rate, and the attacker will get 1/11th in total (1/11,000 per wallet).
If we limit the number of concurrent active withdrawals, then yes, the whale will have to wait until the attacker's 10k SQD becomes fully unlocked. However, it will take the same amount of time to withdraw 10k SQD at once as it would to wait for 1,000 withdrawals of 10 SQD each. So for the attacker, it doesn't make much sense to split past the concurrency limit. And they need to own quite a big share to actually mess with other delegators, which is a reasonable limitation, I think.
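The arithmetic behind the 10/11 figure: with the rate allocated proportionally to each wallet's share of the total queued volume, a wallet withdrawing $q_i$ out of $Q = \sum_j q_j$ queued gets

$$\text{rate}_i = R_{\max} \cdot \frac{q_i}{Q}$$

so the whale's 100k out of 110k queued yields $\frac{100}{110} = \frac{10}{11}$ of $R_{\max}$, while the attacker's 1,000 wallets share the remaining $\frac{1}{11}$ between them.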
I thought the limit of the queue size could be something like 10. How much gas would it cost then, approximately? Also, only those who enter later will have increased gas usage, right?
Hmm, what do you think about a Cumulative Conveyor Belt (Global Leaky Bucket) model?
This system tracks a single global totalRequested counter and processes exits at a fixed globalUnlockRate (e.g., 100 SQD/block), effectively placing every request on a continuous timeline. This renders Sybil attacks useless because the time cost is determined solely by the total volume ahead of you, not the number of participants: a Whale requesting 100k SQD occupies the same "length" on the belt as an Attacker splitting 100k SQD into 1,000 wallets, meaning the last split transaction finishes at the exact same block as the single large transaction would have.
Queue:

```
[====User A (1000 SQD)====][====User B (5000 SQD)====][==User C (500 SQD)==]
←── Belt moves at fixed rate (1000 SQD/block) ──→
```

- Each exit request gets a "ticket" with a position on the belt
- The belt moves at constant speed regardless of who's in line
- You can claim when your position has been "processed"
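For illustration, a minimal Solidity sketch of this belt, assuming a fixed global unlock rate (the rate, names, and omitted checks are all hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical global leaky bucket: every exit request reserves a segment
/// on a virtual belt measured in SQD. The belt is processed at a fixed
/// rate, so the wait is set by the volume ahead of you, not the wallet
/// count -- Sybil splits finish at the same block as one large request.
contract ConveyorBeltSketch {
    uint256 public constant UNLOCK_RATE = 100e18; // SQD processed per block (example)

    uint256 public totalRequested; // belt frontier so far, in SQD
    uint256 public immutable beltStartBlock;

    struct Ticket { uint256 end; uint256 amount; }
    mapping(address => Ticket) public tickets;

    constructor() {
        beltStartBlock = block.number;
    }

    function requestExit(uint256 amount) external {
        // ...checks that the caller actually has `amount` staked omitted...
        // Start the segment at the belt frontier or the processed mark,
        // whichever is further, so idle time doesn't pre-unlock funds.
        uint256 start = totalRequested > processed() ? totalRequested : processed();
        totalRequested = start + amount;
        tickets[msg.sender] = Ticket({end: totalRequested, amount: amount});
    }

    // Processed volume grows linearly with the blocks elapsed.
    function processed() public view returns (uint256) {
        return (block.number - beltStartBlock) * UNLOCK_RATE;
    }

    function claim() external {
        Ticket memory t = tickets[msg.sender];
        require(t.amount > 0, "no ticket");
        require(processed() >= t.end, "still on the belt");
        delete tickets[msg.sender];
        // ...transfer of t.amount SQD back to the caller omitted...
    }
}
```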
Oh, right, this is similar to option 2, but instead of rejecting your request, you get to the end of the queue, knowing in advance how long you'll have to wait. I like it!
The only catch is we'll have to limit the total length of the belt so that the portal doesn't get a quick drop in CUs.
@dzhelezov do you think it will be too restrictive if we queue up everyone who wants to withdraw their stake? Usually everyone will request at different times without even noticing this mechanism. But if there is a massive run, the first-to-request – first-out rule starts to work. It simplifies implementation significantly as well.
Yes, I think this "conveyor belt" is the right approach -- and afaik something similar is implemented for the Ethereum validator withdrawal queue.
> - Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)
28.Nov Call:
- Let's limit the maximum stake from a single wallet to discourage a single party from owning too big a share.
- We probably still want to have "boosts" of returns for SQD locked for a long time.
- It would be great to make your position in the pool transferable, ideally with ERC-1155 tokens. Then delegators may start selling their positions to exit immediately without affecting the pool.
> ### To Discuss
>
> 1. **Liquid Stake Tokens (LST) vs NFTs**:
A decision needs to be made here:
ERC-20 (Separate token per portal)
Upsides:
- Much simpler to understand for users
- Basically every DeFi protocol supports ERC20 out of the box
- Easy wallet support (standard token flow)
- Can trade instantly on Uniswap/DEXs, super straightforward (after pool creation)
- Each portal = its own token + symbol (example: LPT-PORTAL1)
Downsides:
- If we have 100+ portals → that’s 100+ token contracts deployed
- I’m not bringing up gas because on Arbitrum it barely matters
- Harder to track all user positions in a single clean view
- No unified contract that holds everything together
- User must manually add each token to the wallet
ERC-1155 (One contract, many token IDs)
Upsides:
- One contract to rule all portal positions
- Single deployment, much cleaner on the infra side
- Batch transfers possible (exit multiple portals in one tx)
- Super easy portfolio querying (one contract = full overview)
- Native metadata URI support
Downsides:
- We lose most DeFi compatibility
- Harder to trade on secondary markets
- Wallets show it differently (NFT-style vs token-style)
- More complex approval flow (setApprovalForAll)
IMO: we should stick with ERC20 as it simply unlocks way more value through DeFi integrations.
I'm not sure creating a pool per token on DEXes can deliver a good UX — the liquidity there will mostly stay at zero I think.
I hope DeFi infrastructure will eventually mature to support trading ERC-1155 tokens. Until then, it's not really clear how we can provide a good experience.
Discussed with DZ and EF that it should be separate ERC20 tokens to make use of better infrastructure. We don't really care about liquidity and being able to sell that token. There are already enough tools in the ecosystem to work with tokens that are not liquid
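For illustration, a minimal sketch of what a separate per-portal ERC20 LST could look like, building on OpenZeppelin's ERC20 (contract and token names are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

/// Hypothetical per-portal liquid stake token: the pool mints LPT 1:1
/// against locked SQD and burns it on exit. One instance per portal,
/// e.g. with symbol "LPT-PORTAL1" as suggested above.
contract PortalLST is ERC20 {
    address public immutable pool;

    constructor(string memory name_, string memory symbol_) ERC20(name_, symbol_) {
        pool = msg.sender; // deployed by the PortalPool (or its factory)
    }

    modifier onlyPool() {
        require(msg.sender == pool, "only pool");
        _;
    }

    function mint(address to, uint256 amount) external onlyPool {
        _mint(to, amount);
    }

    function burn(address from, uint256 amount) external onlyPool {
        _burn(from, amount);
    }
}
```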
> - Issue: LSTs would be tied to each portal pool (potentially 100+ different tokens if many portals exist)
Important thing to not forget about: the rewards rate for a single delegator shouldn't depend on the total staked amount. If the pool is not full, we should only pay out the share proportional to the total capacity, not to the staked amount.
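In formula form (with $s_i$ the delegator's stake, $C$ the pool capacity, and $S \le C$ the total staked amount), the point is:

$$\text{payout}_i = R \cdot (1 - f) \cdot \frac{s_i}{C} \qquad \text{not} \qquad R \cdot (1 - f) \cdot \frac{s_i}{S}$$

so the undistributed remainder $R \cdot (1 - f) \cdot (1 - S/C)$ stays on the pool's balance, as noted in the implementation comment above.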
Tokenomics 2.1 addresses the centralization issues of Tokenomics 2.0, namely: