tankyu-sha means "seeker" in Japanese.
Work is paused here, but I really want to finish it off in the next couple of months; I'm focusing on a few different things at the moment.
- tankyu-sha is an AI-assisted personal digest tool.
- The end goal is to keep an eye on a bunch of things around the internet without having to visit the websites.
The overall system can be seen in `plan.md`, though it hasn't been updated in a while; I will fix that.
- mise
- ollama
- both the `llama3.2:3b` and `nomic-embed-text` models; they can be changed via `config.json`
- Chrome, or `bunx playwright install chromium`
- run `mise install` to get all deps
- run `mise run config:init`
- run `mise run db:create`
- run `mise run db:migrate`
- run `bunx playwright install` to install Chromium <- right now it tries to get the system app, has to be configurable
- now we can run `gleam run` to start the main app
- run `bun run dev` to start the web app; eventually it'll be served as part of the main app
- visit http://localhost:3000/tasks
- flatten all actor messages into separate actors <- right now all of them are blocked even though we could move ahead
- actor pool for all actors
- keep the Playwright instance alive, and either give every request a new context + tab or reuse the same context
- kill the Playwright service after a cooldown period
- finish the search source
- expose all config options via UI; it's exposed via `priv/app_config.json` now, use `mise run config:init` to generate it
  - summary model, embedding model, or maybe a model per run
  - chromium location <- useful when running outside Docker
- add a courier actor to send the digest
- somehow add a setup to monitor system memory load and throttle LLM ops + browser ops
- add UI for home page task + source creation
- add UI for task list
- add UI for task_runs -> source_runs
- write a Dockerfile to distribute this
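For the config TODO above, here's a sketch of what `priv/app_config.json` might hold once the summary model, embedding model, and chromium location are all exposed. The key names are hypothetical; only the file path and the default model names come from this README.

```json
{
  "summary_model": "llama3.2:3b",
  "embedding_model": "nomic-embed-text",
  "chromium_path": "/usr/bin/chromium"
}
```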
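The two Playwright items above (keep the instance alive, kill it after a cooldown) could be sketched as a small keep-alive wrapper. Everything here is an assumption, not the project's actual API: `KeepAlive`, `Launcher`, and the cooldown behavior are hypothetical names for the pattern.

```typescript
// Hedged sketch: keep one long-lived resource (e.g. a browser) alive,
// reuse it across requests, and dispose it after a cooldown with no
// activity. All names are illustrative, not existing project code.

type Launcher<T> = {
  launch: () => Promise<T>;
  dispose: (resource: T) => Promise<void>;
};

class KeepAlive<T> {
  private resource: T | null = null;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private launcher: Launcher<T>,
    private cooldownMs: number,
  ) {}

  // Reuse the live resource if present, otherwise launch a new one.
  // Acquiring cancels any pending cooldown disposal.
  async acquire(): Promise<T> {
    if (this.timer) clearTimeout(this.timer);
    if (this.resource === null) {
      this.resource = await this.launcher.launch();
    }
    return this.resource;
  }

  // Call after each request: schedules disposal after the cooldown,
  // unless another acquire() arrives first.
  release(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(async () => {
      if (this.resource !== null) {
        await this.launcher.dispose(this.resource);
        this.resource = null;
      }
    }, this.cooldownMs);
  }
}
```

With Playwright, the launcher would presumably wrap `chromium.launch()` / `browser.close()`, and each request would open its own `browser.newContext()` on the acquired instance.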
