
Imagine Apple Intelligence on iPhone, or something Ollama-esque on Android, that was actually good. Private. On-device. Every question queries the relevant data via API/SQL from your phone's data: your notes, journal, photo library, health data, chats, emails. And it uses them just like, e.g., ChatGPT uses web search.
Would you use it? Would you take a picture of a restaurant's menu and, a year later, ask your phone what kinds of burgers they had? Would you jot down random bits of information throughout the day and, when necessary, ask your new second brain about the things you wanted to remember?
Or would you not use it and continue using the device the conventional way?
26 sats \ 0 replies \ @sox 4h
Sometimes I use Apple Intelligence's proofread feature as a free Grammarly on my phone. It helps me resolve some grammar dilemmas. That's it.
reply
105 sats \ 2 replies \ @optimism 9h
Yes I would, but I don't think I'd like it in the way you're describing. Instead I'd want to augment specific processes: not give the model access to everything, but instead, allow everything to access the model. This may feel counter-intuitive, but I see this as "multi-modal LLM" being a (permissioned) API with a service worker behind it, just like camera or microphone.
For example:
  • Amber doesn't need an LLM, so it doesn't need the permission.
  • Obsidian could use an LLM, so it does need the permission, optionally, and when I enable it, it will use it.
This can then be extended to have also a knowledge cache in the same way, so that an app (not a centralized process) can submit new knowledge (for processing and then caching) and query it, much like your "second-brain" idea:
You took a picture of a menu last year and allowed the "knowledge" it contained to be added to that cache; when you take a picture of the menu for the same place this year, it will tell you that your fish taco is now only 3k sats instead of 10k, but you also get only 1 instead of 2 for that money.
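A minimal sketch of this "allow everything to access the model" idea, modeled on how camera/microphone access is gated per app. All names here (`LlmBroker`, `grant`, `query`) are invented for illustration; no such OS interface exists yet:

```python
# Hypothetical OS-level "LLM permission" broker: apps call the model
# through the OS, and the OS checks a per-app grant before dispatching.
from dataclasses import dataclass, field

@dataclass
class LlmBroker:
    granted: set = field(default_factory=set)  # app ids holding the LLM permission

    def grant(self, app_id: str) -> None:
        """User enables the LLM permission for an app (like enabling camera access)."""
        self.granted.add(app_id)

    def query(self, app_id: str, prompt: str) -> str:
        """Refuse apps without the permission; otherwise forward to the model."""
        if app_id not in self.granted:
            raise PermissionError(f"{app_id} lacks the LLM permission")
        # In a real system this would dispatch to the on-device model's
        # service worker; here we just return a placeholder answer.
        return f"[model answer to: {prompt}]"

broker = LlmBroker()
broker.grant("obsidian")  # the user opted Obsidian in
print(broker.query("obsidian", "summarize today's note"))
# "amber" never asked for the permission, so any call it makes raises PermissionError.
```

The point of the design is that the model is a passive capability behind a permission check, rather than an agent with standing access to everything.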
reply
not give the model access to everything, but instead, allow everything to access the model. This may feel counter-intuitive, but I see this as "multi-modal LLM" being a (permissioned) API with a service worker behind it
Interesting.
I mean both approaches would be behind a "safe" API. But I hadn't thought about whether it would be nicer if the application uses the LLM as an API or the LLM retrieves information through an API.
Is one of these inherently more powerful than the other? Is one inherently safer than the other? If so, which way and why?
reply
38 sats \ 0 replies \ @optimism 7h
It feels to me like the llm-to-app interface is both more powerful and riskier than app-to-llm, but app-to-llm is easier to both standardize and optimize. I think it really depends on what you want to achieve.
A nice post came in via HN this Monday, #1057610, that basically argues that chatbot interfaces suck. I subscribe to that thought and feel that prompt writing equals inefficiency, but that's how LLMs are trained: to be a chatbot, a companion.
However, like the author of that article, I believe the better application of the technology is not interactive: it's a background task incorporated into the process, rather than sitting beside it.
If you want a chatbot, the mechanism I propose will probably hinder adoption because it requires adoption per app. It's always cheaper to just circumvent everything and not ask for permission, but then you quickly run into shenanigans like #1052744. I'd really not want any unchecked capability that can do this on any of my devices, so the slower adoption is imho worth it. 1

Footnotes

  1. One of my favorite things nowadays is that I get a "DCL attempted by <bad app> and prevented" message from GrapheneOS, just like I've always loved SELinux, despite its complexity. It's always nice to have OS-level (and hardware) protections against naughty software.
reply
If it was actually private and genuinely useful like that, I’d 100% use it. The idea of having a second brain that actually remembers things I forget sounds amazing. But I’d definitely need to feel confident it wasn’t leaking my life to some server somewhere.
reply
You can already use LLMs that are 100% on-device and private on desktop, so I have no doubt the same will be possible on Android. It just isn't computationally there yet on phones, and the whole infrastructure around it doesn't exist yet.
So I personally don't worry about the "actually private" part - more about the other parts.
reply
5 sats \ 0 replies \ @rafa0x0 8h
That makes sense, and yeah I’m with you on the infrastructure side being the real bottleneck. Even if the models could technically run on-device, without the right OS-level support and clean integrations, it’s just clunky. I guess my hesitation is more about how companies might market it as private while still quietly shipping data off-device. But once it’s truly local and usable, I’m all in.
reply
actually that is terrifying if you think about it this way. but i think something being good doesn't necessarily mean it is better, like at some point we sacrifice something in order to gain an aspect you know. people nowadays rely on AI more than in any era before, it is the dependency trauma that takes over.
reply
I'm not sure I can follow. What is terrifying?
reply
like the fact that our devices note every detail of our life aspect that we don't give that much awareness. from the markets that bombards it with ads and taking into consideration the contents or the feeds we follow. that what teriffies like knowing the version that we barely know.
reply
In case English isn't your first language: can you repeat this in your mother tongue? Because those are not grammatically correct sentences and I don't understand.
reply
Yeah, it is not my first language. What I was trying to say is: it's like how our devices record every detail of our lives, something we're often barely aware of. From the markets that bombard us with ads to the algorithms that take into account the content we follow, it all adds up. What terrifies me is the version of ourselves that's being created, one we barely even recognize or know. I have revised the version I sent, so this one should be clearer to understand.
reply
5 sats \ 1 reply \ @RamPl 7h
If it were truly private and stayed 100% on-device, I'd absolutely use it. A second brain that actually remembers things I've seen or written sounds like a superpower. The key would be trust: if I knew it wasn't phoning home to Apple or Google, I'd start journaling, scanning menus, saving voice notes, and fully leaning into the idea. But if it's just another cloud-bound "smart" feature with a pretty UI, no thanks.
reply
Do you think you could get tired of it and return to using the phone the conventional way 100% of the time once the novelty wears off?
reply
5 sats \ 1 reply \ @claos545 7h
Yeah, I'd absolutely use it - if the privacy guarantees are legit and it's truly on-device. The idea of turning your phone into a personalized second brain that can actually reason over your own data feels like what smartphones were always meant to become. Not just a filing cabinet, but an active memory assistant.
reply
Do you think you could get tired of it and return to using the phone the conventional way 100% of the time once the novelty wears off?
reply
To start with, I am not OK with AI scraping through my photos or texts; I am avoiding this as strongly as I can. Maybe for some super-basic tasks that can be done while driving it would be great. So my answer would be: it depends on what it can do and, more importantly, on how it protects me and my data.
reply
to start with, I am not OK with AI scraping through my photos or texts.
Even when it's 100% on-device?
reply
AI that's 100% on-device is no longer AI... or, better said, it's a limited AI. Can this be possible, to have a limited AI? If so, it would be great.
reply
Have you ever used an 8b or 27b LLM on your personal device? Yes or no?
reply
5 sats \ 1 reply \ @Grateful 9h
Depends how intrusive it is and how much privacy and data you further compromise.
reply
and how much privacy and data you further compromise
Assuming it's 100% offline on-device
reply
Can't you organize yourself and think for yourself? Do you really need to feed a tool that is there to subjugate you through algocracy? It is well known that people prefer comfortable prisons to freedom. It's incredible that they also want to feed the dogs that will hunt them down to death.
reply
Have you never searched for a very old email, only for the search function to give you dozens of emails about the topic? Wouldn't it be nice to just ask for the information, with the phone searching for it in a "smart" way instead of by keywords?
Have you never searched for a picture in your library when it would have been much faster to describe it in words to the computer?
Have you never been invited to a BBQ at your neighbor's and thought, "Shoot, last time he told me where he went to university; if only I could remember it for small talk later"? Wouldn't it be nice if you had written it down in your journaling app and could now just ask your phone?
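To illustrate why "smart" retrieval beats keyword search for this, here is a toy sketch. The "embeddings" are hand-made numbers and the journal entries are invented; a real on-device assistant would use a learned embedding model, but the retrieval mechanics are the same:

```python
# Toy semantic search: rank stored entries by cosine similarity to a query
# vector instead of matching keywords. All vectors are made up by hand.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-crafted 3-d "embeddings" for a few hypothetical journal entries.
entries = {
    "Neighbor Bob studied at MIT": [0.9, 0.1, 0.0],
    "Fish taco was 3k sats at the beach place": [0.1, 0.9, 0.1],
    "Dentist appointment moved to Friday": [0.0, 0.1, 0.9],
}

# Pretend this vector encodes "where did my neighbor go to university?"
query = [0.85, 0.15, 0.05]
best = max(entries, key=lambda text: cosine(query, entries[text]))
print(best)  # the MIT entry wins even though "university" never appears in it
```

Keyword search would find nothing here, since the entry never contains the word "university"; similarity in embedding space is what makes the "ask your phone" use case work.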
reply