For now our only customer-facing UI is a ChatGPT-style LLM chat interface, so the initial agents should be ones that work well for chat and where retrieval-augmented generation (RAG) is helpful. One example: a chat agent specializing in a newer version of a code library, where a standard LLM's training data would be outdated.
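To make that concrete, here's a minimal sketch of the RAG pattern for that example. This is not our actual pipeline; it assumes an OpenAI-compatible API, and the doc chunks, model names, and nearest-neighbor search are all illustrative:

```python
# Hedged sketch: embed docs for the newer library version ahead of time,
# retrieve the chunk closest to the user's question, and prepend it to the
# prompt so the model answers from current docs instead of stale training data.
import numpy as np
from openai import OpenAI

client = OpenAI()

doc_chunks = [
    "v2.0 renames `connect()` to `open_session()` and makes it async.",
    "v2.0 removes the global config object; pass settings per call.",
]
doc_vectors = np.array([
    d.embedding
    for d in client.embeddings.create(
        model="text-embedding-3-small", input=doc_chunks
    ).data
])

def answer(question: str) -> str:
    # Embed the question and find the most similar doc chunk (cosine similarity).
    q = np.array(
        client.embeddings.create(
            model="text-embedding-3-small", input=[question]
        ).data[0].embedding
    )
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = doc_chunks[int(scores.argmax())]

    # Augment the prompt with the retrieved docs before asking the model.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using these docs:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```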
Where we differ from OpenAI is in the time we've spent building support for developer extensibility, so developers can write agent plugins that extend agents with custom functionality. This is poorly documented right now at https://openagents.com/docs and will be explained properly in our new docs next week.
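Until those docs land, here's roughly the shape of the idea. This is a hypothetical sketch, not our actual plugin API; every name below is illustrative:

```python
# Hedged sketch of an agent plugin: a plugin packages a named, described
# function that an agent can call to gain functionality it doesn't have
# natively. The real OpenAgents interface may look quite different.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Plugin:
    name: str
    description: str           # tells the agent when this plugin is relevant
    run: Callable[[str], str]  # the custom functionality itself

def lookup_latest_docs(query: str) -> str:
    # Illustrative stub: fetch current docs for a fast-moving library.
    return f"Latest docs matching '{query}': ..."

docs_plugin = Plugin(
    name="docs_lookup",
    description="Look up up-to-date documentation for library X",
    run=lookup_latest_docs,
)

# An agent with this plugin registered can route relevant user requests to
# docs_plugin.run(...) instead of relying on its training data.
```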
The general idea is that any developer who sees an opportunity to extend OpenAgents into a new market should earn rev-share on whatever enables that new functionality, whether it's an individual agent, a plugin or capability shared by multiple agents, or a system-wide integration added to our core codebase. Many of those details we'll need to figure out ad hoc and build systems for over time.
That’s a long way of saying we expect OpenAgents to encompass all of the above and more.
If I had to predict a major theme for new agent functionality in the short term, I’d say we’ll see a lot of focus on automating code generation, like what you’re starting to see with Devin/OpenDevin. Partly because we want to use codegen agents ourselves!
One thing we hope to see in this SN community is discussion between potential buyers and creators of agents. Our core development efforts will focus on building any missing primitives or integrations needed to enable agent builders to satisfy market demand identified here.
Agents “that actually work” are still largely uncharted territory and will need some community discovery to build the market.