Ideas Worth Exploring: 2025-03-11
- Charles Ray
- Mar 11
- 2 min read
Updated: Mar 12
Fully In-Browser Graph RAG Ideas with Kuzu-Wasm

The article discusses the development and demonstration of a fully in-browser chatbot that uses Graph Retrieval-Augmented Generation (Graph RAG) to answer natural-language questions about LinkedIn data, built on WebAssembly (Wasm) versions of Kuzu and WebLLM. Because the application runs entirely in the user's browser, it offers privacy, easy deployment, and speed benefits. The chatbot follows a three-step process: the user asks a question; the question is converted into a Cypher query that retrieves relevant data from the graph database (Kuzu); and the original question, along with the retrieved context, is given to the LLM to produce an answer. Browser resource limits constrain the sizes of the application's components, such as the LLM that can be used. A live demo and source code for the project are available online. The article also anticipates future performance improvements from advancements in WebGPU, Wasm64, and smaller, more capable LLMs.
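The three-step flow described above can be sketched as follows. This is a minimal illustration with stubbed LLM and database calls; the function names and the canned data are assumptions, not the demo's actual API (which uses Kuzu-Wasm and WebLLM in the browser).

```python
# Minimal sketch of the three-step Graph RAG flow, with stubs standing in
# for the LLM and the graph database.

def question_to_cypher(question: str) -> str:
    """Step 1-2: an LLM would translate the question into Cypher.
    Stubbed here with a fixed query for illustration."""
    return "MATCH (p:Person)-[:WORKS_AT]->(c:Company) RETURN p.name, c.name"

def run_query(cypher: str) -> list[tuple[str, str]]:
    """Step 2: the graph database (Kuzu in the demo) would execute the
    query. Stubbed with canned rows."""
    return [("Alice", "Acme"), ("Bob", "Initech")]

def answer(question: str) -> str:
    """Step 3: the original question plus the retrieved context goes to
    the LLM, which produces the final answer. Stubbed as a template."""
    context = run_query(question_to_cypher(question))
    rows = "; ".join(f"{p} works at {c}" for p, c in context)
    return f"Based on the graph: {rows}."

print(answer("Where do my connections work?"))
```

The key design point is that the LLM never sees the whole graph, only the rows the Cypher query retrieves, which is what keeps the approach feasible for the small models that fit in a browser.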
Evolving Agents Ecosystem
A toolkit for agent autonomy, evolution, and governance. Create agents that can understand requirements, evolve through experience, communicate effectively, and build new agents and tools - all while operating within governance guardrails.

The toolkit aims to revolutionize AI agent systems by enabling autonomous agents to build and improve themselves without human intervention. Key features include intelligent agent evolution, agent-to-agent communication, a smart library with semantic search, self-improving systems, human-readable YAML workflows, multi-framework support, governance through firmware, semantic decision logic, and a service-bus architecture for centralized infrastructure management (experimental, coming soon). The toolkit builds upon existing frameworks like BeeAI, focusing on agent autonomy, evolution, and self-governance to move toward truly autonomous AI systems that can improve themselves while remaining within safe boundaries.
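To make "human-readable YAML workflows" concrete, a workflow might look something like the hypothetical sketch below. The field names, agent names, and governance reference are illustrative assumptions, not the toolkit's actual schema.

```yaml
# Hypothetical workflow sketch -- field names are illustrative, not the
# toolkit's actual schema.
workflow:
  name: summarize-requirements
  steps:
    - agent: requirement_analyzer   # assumed agent name
      input: "{{ user_request }}"
      output: requirements
    - agent: summarizer             # assumed agent name
      input: "{{ requirements }}"
      output: summary
  governance:
    firmware: default-guardrails    # assumed policy reference
```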
VideoPainter

VideoPainter enables plug-and-play, text-guided video inpainting and editing for any video length and any pre-trained Diffusion Transformer, given a masked video and a video caption (the user's editing instruction). VideoPainter contains 1) an efficient, plug-and-play dual-branch framework featuring a lightweight background context encoder, and 2) an ID-resampling technique that preserves the identity of inpainted regions.
As a generative model, VideoPainter may occasionally produce unexpected outputs due to several limitations: (1) Generation quality is limited by the base model, which may struggle with complex physical and motion modeling, and (2) performance is suboptimal with low-quality masks or misaligned video captions. We're actively working on an improved version with enhanced datasets and a more powerful foundation model.
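A rough sketch of the dual-branch idea, a lightweight encoder over the masked background whose features are injected into a frozen backbone, is shown below. The shapes, the linear projection, and the additive injection are assumptions for illustration, not VideoPainter's actual implementation.

```python
import numpy as np

# Illustrative sketch of a dual-branch design: a small "context encoder"
# processes the masked background video, and its features are added into
# the hidden states of a frozen generative backbone. All shapes and layers
# here are stand-ins, not VideoPainter's code.

rng = np.random.default_rng(0)
T, H, W, C, D = 4, 8, 8, 3, 16   # frames, height, width, channels, hidden dim

video = rng.standard_normal((T, H, W, C))
mask = np.zeros((T, H, W, 1))
mask[:, 2:6, 2:6, :] = 1.0       # region to inpaint

# The context branch only ever sees the background (unmasked) pixels.
masked_background = video * (1.0 - mask)

# "Lightweight background context encoder": a single per-pixel linear
# projection as a stand-in.
W_ctx = rng.standard_normal((C, D)) * 0.1
context_features = masked_background @ W_ctx      # (T, H, W, D)

# Hidden states of the frozen backbone (stand-in for the pre-trained DiT).
hidden = rng.standard_normal((T, H, W, D))

# Plug-and-play injection: add the context features to the backbone's
# hidden states without modifying the backbone's weights.
hidden = hidden + context_features
print(hidden.shape)
```

The point of this structure is that only the small context branch needs training, which is why the framework can be plugged into different pre-trained Diffusion Transformers.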
smalldiffusion

A lightweight diffusion library for training and sampling from diffusion models. It is built for easy experimentation when training new models and developing new samplers, supporting everything from minimal toy models to state-of-the-art pretrained models. The core of the library's diffusion training and sampling is implemented in under 100 lines of very readable PyTorch code. To install from PyPI: