☆ Yσɠƚԋσʂ ☆

  • 3.11K Posts
  • 1.37K Comments
Joined 6 years ago
cake
Cake day: March 30th, 2020

  • Ultimately, these things aren’t concrete plans; it’s just a conversation starter. The people who published it aren’t building anything, but it does provide a starting point for those of us who do build things to think about. The parts I thought were meaningful were in the list at the end:

    • Private: In the era of AI, whoever controls the context holds the power. While data often involves multiple stakeholders, people must serve as primary stewards of their own context, determining how it’s used.
    • Dedicated: Software should work exclusively for you, ensuring contextual integrity where data use aligns with your expectations. You must be able to trust there are no hidden agendas or conflicting interests.
    • Plural: No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
    • Adaptable: Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it.
    • Prosocial: Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.

    I think these are all good things to strive for.



  • The goals they state seemed perfectly reasonable to me. I don’t really see any contradiction between hyper-personalized computing and having thriving communities. I think it would be great if you could easily tailor your computer to your workflow. It doesn’t mean I’m not able to have shared interests with other people who have different workflows.

    In fact, I think the way modern applications are built is fundamentally wrong precisely because they couple the logic of the app with the UI. This is the reason we can’t compose apps the way we do command line utils. If apps were broken up into a service and a frontend component by default, you’d be able to chain these services together to build highly customized workflows.

    And that’s precisely the kind of thing AI tools are actually decent at doing. You can throw a bunch of API endpoints at it and have it build a UI using them that does what you want, or if it’s good enough you might not even need a UI, you can literally just type what you want and it’ll figure it out.
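    The decoupling described above can be sketched in a few lines. Each “service” here is just a pure function over plain data, and a workflow is a chain of services, the same way shell pipelines chain command line utils. All the names (`fetch_notes`, `filter_notes`, etc.) are hypothetical stand-ins, not a real framework:

```python
# Sketch of the service/frontend split: services expose pure logic over
# plain data, and a workflow is just a composition of services. In a real
# system each service would be an API endpoint; here they are stubbed as
# local functions purely for illustration.

from functools import reduce
from typing import Callable

Service = Callable[[dict], dict]

def fetch_notes(ctx: dict) -> dict:
    # stand-in for a notes service endpoint
    ctx["notes"] = ["buy milk", "review PR", "buy stamps"]
    return ctx

def filter_notes(ctx: dict) -> dict:
    # stand-in for a search/filter service
    ctx["notes"] = [n for n in ctx["notes"] if ctx["query"] in n]
    return ctx

def render_text(ctx: dict) -> dict:
    # one possible frontend; a GUI could consume the same services
    ctx["output"] = "\n".join(f"- {n}" for n in ctx["notes"])
    return ctx

def pipeline(*services: Service) -> Service:
    # compose services left to right, like `cmd1 | cmd2 | cmd3`
    return lambda ctx: reduce(lambda acc, s: s(acc), services, ctx)

workflow = pipeline(fetch_notes, filter_notes, render_text)
result = workflow({"query": "buy"})
print(result["output"])
```

    Swapping `render_text` for a different frontend, or inserting another service into the chain, changes the workflow without touching any app logic.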

  • The main approach right now is just doing reinforcement learning and minimizing the operational context for the model. Another, more expensive, approach is using a quorum, as seen here. You have several agents produce a solution and take the majority vote. While that sounds expensive, it might not actually be much of a problem if you make smaller specialized models for particular domains. Another track people are pursuing is a neurosymbolic hybrid approach, where you couple the LLM with a symbolic logic engine. This is my personal favorite route, and what I did with matryoshka is a variation on that. The LLM sits at the very top, and its job is to analyze natural language user input, form declarative queries to the logic engine, and evaluate the output. This way the actual logic of solving the problem is handled in a deterministic way.
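    The quorum idea is simple enough to sketch. The agents here are stubbed as a list of answers; in practice each would be a separate (possibly smaller, specialized) model run on the same task, and the majority answer wins:

```python
# Minimal sketch of quorum/majority voting over several agent runs.
# The three answers below are hypothetical stand-ins for three
# independent model invocations on the same prompt.

from collections import Counter

def majority_vote(answers: list[str]) -> str:
    # pick the most common answer; ties resolve to first-seen order
    return Counter(answers).most_common(1)[0][0]

answers = ["42", "42", "41"]  # two agents agree, one dissents
print(majority_vote(answers))
```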


  • Thanks, honestly I’m really shocked it took people this long to realize that keeping state would be useful. The way MCP works by default is completely brain dead. And agreed about RAG, it’s good for biasing the model, but that’s about it. My thesis is that you don’t even need reinforcement learning; you can have a symbolic logic engine that the LLM drives instead. The job of the LLM is to parse noisy outside inputs like user queries, and to analyze the results to decide what to do next. The LLM’s context should stay focused on that: I have a task, and I make some declarative queries to the logic engine. The engine then takes over and does actual genuine reasoning to produce the result. The key failing of symbolic AI was the ontological problem, where you got a combinatorial explosion trying to create the ontologies for it to operate on. But with LLMs we can have the model build the context for the logic engine to operate within on the fly. And you can even do code generation this way!
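    The division of labor described above can be sketched as follows: a deterministic forward-chaining engine does the reasoning, while the LLM’s role (stubbed here as a trivial parser) is only to turn noisy input into facts and queries. Everything here is illustrative, not matryoshka’s actual API:

```python
# Sketch of the neurosymbolic split: the logic engine is a deterministic
# forward-chainer over (subject, property) facts; the "LLM" is stubbed
# as a toy parser that maps natural language to a query tuple.

def forward_chain(facts: set, rules: list) -> set:
    # rules are (premises, conclusion) pairs; apply until fixpoint.
    # This loop is fully deterministic -- no model involved.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# context the LLM would build on the fly from the user's input
facts = {("socrates", "human")}
rules = [({("socrates", "human")}, ("socrates", "mortal"))]

def llm_parse(query: str):
    # stand-in for the LLM: turn a noisy question into a declarative query
    return ("socrates", "mortal") if "mortal" in query else None

goal = llm_parse("is socrates mortal?")
print(goal in forward_chain(facts, rules))
```

    The point is the separation: if the engine derives the goal, the answer is correct by construction, and the LLM only has to get the translation step right.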