Here are three ideas I’m trying to concretize:

Trajectory Crafting

Idea: Use an embedding model on a sliding window of generated tokens to analyze the trajectory of the output.

Goal: If we can track where a generation path is leading, we can steer it, and we can better quantify the effect of steering.

Difficulties:

  1. We need a meaningful set of basis vectors to create poles in embedding space so we know where we’re going… Doing that feels complicated.
  2. We need a vocabulary of tokens with high steering power, plus an understanding of how generation reacts to different injection contexts.

Potential Solutions:

  1. Embed “pole” trajectories and see if we’re moving closer to them via cosine similarity
  2. Continue searching for more meaningful vectors (are there ways to discover an ‘antonym’ vector?)
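A minimal sketch of solution 1, the sliding-window pole-similarity idea. The `embed` function here is a hypothetical stand-in (a deterministic hash-seeded vector), not a real embedding model; in practice you would swap in an actual embedding model call:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in embedding: a deterministic pseudo-random unit vector
    # seeded by a hash of the text. A real setup would call an
    # embedding model here instead.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def trajectory_similarity(tokens: list[str], pole_text: str, window: int = 5) -> list[float]:
    # Slide a fixed-size window over the generated tokens, embed each
    # window, and measure cosine similarity to the embedded "pole"
    # trajectory. Rising similarity suggests the output is drifting
    # toward that pole.
    pole = embed(pole_text)
    sims = []
    for i in range(len(tokens) - window + 1):
        chunk = " ".join(tokens[i : i + window])
        sims.append(cosine(embed(chunk), pole))
    return sims
```

With a real embedding model, the per-window similarity series is the quantity to watch: steering interventions should show up as a measurable change in its slope toward (or away from) a pole.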

Software Evolution

Idea: Clarify what using a computer will look like for a single end-user in 10 years, assuming everything is AI-ified (excluding art/culture)

Goal: Prepare in advance for potential “end states” of software

Deeper Description: I don’t really know what would be left to do on a computer if we have fully “agentified” digital lives. Theoretically, you could just have a single page where you ask natural language questions and receive information back in whatever form you want. All services would be connected.

Questions:

  1. Why do we use computers in the first place?
  2. What are the specific qualities of agent usage that we’re interested in?
    1. Blanketing complexities with natural language
    2. Intelligence / Analysis
    3. Uniqueness
  3. Why hasn’t the “hyper-individualized” software boom fully taken off?
    1. Is this changed by Opus 4.5?

Resource Allocation

Idea: Perhaps it’s just the Twitter bubble I find myself in, but why isn’t it true that the only economically valuable resources in the AI revolution will be those specifically involved in the creation, transportation, and deployment of AI?

Goal: High ground