ADD RWKV7 #2421
Status: Open

pass-lin wants to merge 40 commits into keras-team:master from pass-lin:rwkv.
Commits (40, all by pass-lin):
195ef79  add RWKV
7bc36b5  fix
7d4a7a1  fix
e5bb446  add inference
afcff31  add inference
ec0baf3  add tokenizer doc
bd6c618  add doc
4201a7f  add test case
897a64b  fix test
ff11f94  fix doc
ce13d54  fix gemini review.
0e36b4a  format.
7218888  format.
cc5815b  save tokenizer
dd80464  fix tokenizer load
5e8723d  fix save
f223002  renew preset
b2b1573  renew perset.
c5ebeec  debug for remat
14111c8  modify by gemini review .
a88ae01  modify
7f8bda7  modify
00200a8  modify
e97b458  modify
75a4415  modify
8c3638b  modify
468dce1  modify rwkv casual lm.
637fdcb  modify tokenizer
24e67ec  fix test bug
4eb4845  fix test bug
be4a649  fix test bug
28700d9  fix test bug
2e2d5c0  fix test bug
97b39cf  fix test bug
44e6476  fix test bug
b7ed34b  fix test bug
b3e33fd  fix test bug
75c8a88  modify RWKV7CausalLMPreprocessor
eac1505  modify RWKV7CausalLMPreprocessor
06ec6c5  modify RWKV7CausalLMPreprocessor
`keras_hub/src/models/rwkv7/rwkv7_backbone.py` (new file, 179 lines):

````python
import keras
from keras import ops

from keras_hub.src.api_export import keras_hub_export
from keras_hub.src.models.backbone import Backbone
from keras_hub.src.models.rwkv7.rwkv7_layer import RWKV7_Block


def rwkv7_kernel_initializer(stddev=0.02):
    return keras.initializers.TruncatedNormal(stddev=stddev)


@keras_hub_export("keras_hub.models.RWKV7Backbone")
class RWKV7Backbone(Backbone):
    """The RWKV-7 core architecture with hyperparameters.

    This network implements the RNN-based decoder, Goose, described in
    [RWKV-7](https://arxiv.org/abs/2503.14456): a modern RNN built on
    linear attention with recurrent processing. It includes the embedding
    lookups and the stack of RWKV-7 blocks.

    The default constructor gives a fully customizable, randomly initialized
    RWKV-7 model with any number of layers, heads, and embedding dimensions.
    To load preset architectures and weights, use the `from_preset`
    constructor.

    Args:
        hidden_size: int. The dimensionality of the embeddings and hidden
            states.
        head_size: int. The size of each attention head.
        num_layers: int. The number of RWKV-7 blocks.
        vocabulary_size: int. The size of the token vocabulary.
        intermediate_dim: int. The output dimension of the first Dense layer
            in the two-layer feedforward network of each block.
        gate_lora: int. LoRA dimension for gating. Defaults to `128`.
        mv_lora: int. LoRA dimension for value mixing. Defaults to `32`.
        aaa_lora: int. LoRA dimension for alpha parameters. Defaults to `64`.
        decay_lora: int. LoRA dimension for decay parameters. Defaults to
            `64`.
        dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to
            use for model computations and weights. Note that some
            computations, such as softmax and layer normalization, will
            always be done at float32 precision regardless of dtype.
        dropout_rate: float. Dropout rate applied after the RWKV-7 blocks.
            Defaults to `0`.

    Example:
    ```python
    input_data = np.ones(shape=(1, 12), dtype="int32")

    # Randomly initialized RWKV-7 decoder with custom config.
    model = keras_hub.models.RWKV7Backbone(
        vocabulary_size=10,
        hidden_size=512,
        num_layers=2,
        head_size=64,
        intermediate_dim=1024,
        dtype="float32",
    )
    model(input_data)
    ```
    """

    def __init__(
        self,
        hidden_size,
        head_size,
        num_layers,
        vocabulary_size,
        intermediate_dim,
        gate_lora=128,
        mv_lora=32,
        aaa_lora=64,
        decay_lora=64,
        dtype=None,
        dropout_rate=0,
        **kwargs,
    ):
        # === Layers ===
        self.token_embedding = keras.layers.Embedding(
            input_dim=vocabulary_size,
            output_dim=hidden_size,
            embeddings_initializer=rwkv7_kernel_initializer(),
            dtype=dtype,
            name="token_embedding",
        )
        self.output_layer_norm = keras.layers.LayerNormalization(
            epsilon=1e-5,
            name="output_norm",
            dtype=dtype,
        )
        self.dropout = keras.layers.Dropout(
            dropout_rate,
            dtype=dtype,
            name="dropout",
        )
        self.rwkv_layers = []
        for i in range(num_layers):
            layer = RWKV7_Block(
                hidden_size,
                head_size,
                intermediate_dim,
                gate_lora,
                mv_lora,
                aaa_lora,
                decay_lora,
                use_initial_norm=i == 0,
                kernel_initializer=rwkv7_kernel_initializer(),
                dtype=dtype,
                name=f"rwkv_layer_{i}",
            )
            self.rwkv_layers.append(layer)
        self.head = keras.layers.Dense(
            units=vocabulary_size,
            kernel_initializer=rwkv7_kernel_initializer(),
            use_bias=False,
            name="head",
            dtype=dtype,
        )

        # === Functional Model ===
        token_id_input = keras.Input(
            shape=(None,), dtype="int32", name="token_ids"
        )
        padding_mask_input = keras.Input(
            shape=(None,), dtype="int32", name="padding_mask"
        )
        x = self.token_embedding(token_id_input)
        padding_mask = ops.cast(padding_mask_input, dtype=x.dtype)
        # `v_first` carries the first block's value activations through the
        # later blocks, as RWKV-7's value-residual scheme requires.
        v_first = None
        for rwkv_layer in self.rwkv_layers:
            x, v_first = rwkv_layer(x, v_first, padding_mask)
        x = self.dropout(x)
        sequence_output = self.output_layer_norm(x)
        sequence_output = self.head(sequence_output)
        super().__init__(
            inputs={
                "token_ids": token_id_input,
                "padding_mask": padding_mask_input,
            },
            outputs=sequence_output,
            dtype=dtype,
            **kwargs,
        )

        # === Config ===
        self.num_layers = num_layers
        self.head_size = head_size
        self.hidden_size = hidden_size
        self.gate_lora = gate_lora
        self.mv_lora = mv_lora
        self.aaa_lora = aaa_lora
        self.decay_lora = decay_lora
        self.vocabulary_size = vocabulary_size
        self.dropout_rate = dropout_rate
        self.intermediate_dim = intermediate_dim

    def get_config(self):
        config = {
            "hidden_size": self.hidden_size,
            "head_size": self.head_size,
            "gate_lora": self.gate_lora,
            "mv_lora": self.mv_lora,
            "aaa_lora": self.aaa_lora,
            "decay_lora": self.decay_lora,
            "vocabulary_size": self.vocabulary_size,
            "dropout_rate": self.dropout_rate,
            "intermediate_dim": self.intermediate_dim,
            "num_layers": self.num_layers,
        }
        base_config = super().get_config()
        return dict(list(base_config.items()) + list(config.items()))
````
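For reviewers who want to try the branch, here is a minimal forward-pass sketch. It assumes this PR's branch is installed (`RWKV7Backbone` is not in any released keras-hub); the small hyperparameters mirror the test config below, and the `v_first` value-residual threaded through the blocks is handled inside the functional model, so callers only pass token IDs and a padding mask.

```python
import numpy as np
import keras_hub

# Assumes this PR's branch is installed; the LoRA dimensions are left
# at their defaults (128/32/64/64).
model = keras_hub.models.RWKV7Backbone(
    vocabulary_size=10,
    hidden_size=16,  # presumably a multiple of head_size (4 heads of 4)
    num_layers=2,
    head_size=4,
    intermediate_dim=32,
)
token_ids = np.ones((2, 16), dtype="int32")
padding_mask = np.ones((2, 16), dtype="int32")
# The functional model maps both inputs to per-token logits over the
# vocabulary, i.e. shape (batch, sequence, vocabulary_size).
logits = model({"token_ids": token_ids, "padding_mask": padding_mask})
print(logits.shape)  # (2, 16, 10)
```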
And the accompanying tests for `RWKV7Backbone` (new file, 38 lines):

```python
import pytest
from keras import ops

from keras_hub.src.models.rwkv7.rwkv7_backbone import RWKV7Backbone
from keras_hub.src.tests.test_case import TestCase


class RWKV7BackboneTest(TestCase):
    def setUp(self):
        self.init_kwargs = {
            "vocabulary_size": 10,
            "hidden_size": 16,
            "num_layers": 2,
            "head_size": 4,
            "intermediate_dim": 32,
            "gate_lora": 32,
            "mv_lora": 16,
            "aaa_lora": 16,
            "decay_lora": 16,
        }
        t = ops.ones((2, 16), dtype="int32")
        self.input_data = {"token_ids": t, "padding_mask": t}

    def test_backbone_basics(self):
        self.run_backbone_test(
            cls=RWKV7Backbone,
            init_kwargs=self.init_kwargs,
            input_data=self.input_data,
            expected_output_shape=(2, 16, 10),
        )

    @pytest.mark.large
    def test_saved_model(self):
        self.run_model_saving_test(
            cls=RWKV7Backbone,
            init_kwargs=self.init_kwargs,
            input_data=self.input_data,
        )
```
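`run_backbone_test` and `run_model_saving_test` come from keras-hub's shared `TestCase`, and the saving test is gated behind `@pytest.mark.large`, so it only runs when large tests are enabled. Below is a rough sketch of the save/load round-trip that test exercises, assuming the branch is installed and that the `keras_hub_export` decorator registers the class for Keras serialization, as it does for other backbones.

```python
import numpy as np
import keras

from keras_hub.src.models.rwkv7.rwkv7_backbone import RWKV7Backbone

# Same small config as the test above.
model = RWKV7Backbone(
    vocabulary_size=10,
    hidden_size=16,
    num_layers=2,
    head_size=4,
    intermediate_dim=32,
    gate_lora=32,
    mv_lora=16,
    aaa_lora=16,
    decay_lora=16,
)
data = {
    "token_ids": np.ones((2, 16), dtype="int32"),
    "padding_mask": np.ones((2, 16), dtype="int32"),
}
out = keras.ops.convert_to_numpy(model(data))

# Round-trip through the .keras format; the restored model should
# reproduce the original outputs from the same inputs.
model.save("rwkv7_backbone.keras")
restored = keras.saving.load_model("rwkv7_backbone.keras")
np.testing.assert_allclose(
    out, keras.ops.convert_to_numpy(restored(data)), atol=1e-5
)
```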