
the latter: use human judges. Read it all. Form an opinion. Read it again. You just have to get high-quality judges.
For my weekly AI content aggregation post I discover content by:
  1. Reading everything posted to the territory as it comes in, anyway
  2. On Monday, reading everything I missed (i.e. posts on other territories that didn't make it to hot, or that came in while I was busy/asleep)
  3. Filtering out slop, if any (there isn't much)
Then I "pre-judge" by:
  1. sort everything by comments first, to see what got the most engagement, and maybe pick some stuff that's interesting
  2. sort everything by sats (not zaprank), to see what got upzapped the most
Then I decide what to highlight by:
  1. finding posts that I personally think fit nicely together, especially if they argue opposing sides or can be narrated as reinforcing a point.
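The pre-judge passes above could be sketched roughly like this; the post records and field names (`comments`, `sats`) are made up for illustration, not the real schema:

```python
# Hypothetical post records; field names are assumptions, not the real schema.
posts = [
    {"title": "A", "comments": 12, "sats": 350},
    {"title": "B", "comments": 4, "sats": 2100},
    {"title": "C", "comments": 9, "sats": 800},
]

# Pass 1: sort by comment count to surface the most-discussed posts.
by_engagement = sorted(posts, key=lambda p: p["comments"], reverse=True)

# Pass 2: sort by raw sats (not zaprank) to surface the most-upzapped posts.
by_sats = sorted(posts, key=lambda p: p["sats"], reverse=True)

print(by_engagement[0]["title"], by_sats[0]["title"])  # prints: A B
```

Two sorts rather than one combined score, since the two passes answer different questions: what people talked about vs. what people paid for.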
Wow, didn't realize you put so much work into those summaries. At some point I even cheaply assumed you were using AI to curate them...
Appreciate the work!
If I were using an AI I'd do it like this:
  1. Query GraphQL for every new post
  2. Extract the post, the link if present, and the comments
  3. Use an LLM to: summarize the post, the underlying article and the discussions
  4. Use NER to: extract named entities
  5. Populate a graphdb that links all named entities across all articles
  6. Read every summary instead of every linked article
  7. Continue at 4 above, lol.
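A rough sketch of what that pipeline might look like, with every external piece stubbed out: the GraphQL fetch returns canned posts, the LLM summarizer is naive truncation, NER is a crude capitalized-token heuristic, and a plain dict stands in for the graph database. All names here are hypothetical:

```python
import re
from collections import defaultdict

def fetch_new_posts():
    # Stand-in for step 1: a GraphQL query for every new post. A real version
    # would POST a query selecting id/title/text/url/comments to the API.
    return [
        {"id": 1, "title": "OpenAI ships GPT-5",
         "text": "OpenAI released a new flagship model called GPT-5 this week."},
        {"id": 2, "title": "Anthropic on interpretability",
         "text": "Anthropic published interpretability work comparing Claude and GPT-5."},
    ]

def summarize(text, max_words=20):
    # Stand-in for step 3: an LLM summary. Here: naive truncation.
    return " ".join(text.split()[:max_words])

def extract_entities(text):
    # Stand-in for step 4: NER. Here: a crude capitalized-token heuristic;
    # a real version would use an actual NER model.
    return set(re.findall(r"\b[A-Z][A-Za-z0-9-]+\b", text))

def build_entity_graph(posts):
    # Step 5: link each named entity to every post that mentions it,
    # standing in for a real graph database.
    graph = defaultdict(set)
    for post in posts:
        for entity in extract_entities(post["title"] + " " + post["text"]):
            graph[entity].add(post["id"])
    return graph

posts = fetch_new_posts()
for p in posts:
    p["summary"] = summarize(p["text"])  # step 6: read summaries, not articles

graph = build_entity_graph(posts)
shared = sorted(e for e, ids in graph.items() if len(ids) > 1)
print(shared)  # entities linking multiple posts -> ['GPT-5']
```

The entity graph is what makes step 7's loop pay off: once two posts share an entity, they become candidates for grouping in the weekly roundup.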
That's very specific. Hmmm, maybe you are using AI~~
lmao. No, but I'd code it like that if I had the time.
Can't leave this to some LLM to code; it would cost me even more time.
I like this approach. Algos assist in surfacing items, but the final judgment is made by a human, with added value from grouping like topics together.