
AI and wealth: what is in it for you

AI and wealth (and what it means for our financial future)

I have been thinking about how AI could shape people’s financial futures, and San Francisco strikes me as a sharp case study.

San Francisco—famous for tech booms and busts—is in another shift, this time driven by AI. What I’m seeing looks like a new K‑shaped economy: not just “haves” and “have‑nots,” but “haves” and “have‑mores.” [S1]

Why this matters financially (for regular people)

1) Ownership (and the upside) is concentrated

In my opinion, the biggest financial risk starts with who owns the most capable AI systems. In practice, the strongest LLMs—and the data and distribution around them—tend to sit inside large organizations with capital and proprietary data advantages. When the tools that raise productivity are centralized, the gains can flow upward first—and stay there for a long time. [S1]

Tradeoff: we get rapid progress and powerful tools, but fewer people share in the ownership upside. [S1]

2) AI is replacing parts of white‑collar work (including programming)

In my experience, the fear isn’t abstract—it’s about bargaining power. If AI makes some tasks cheaper and faster, many roles become easier to substitute, especially in office work and programming‑adjacent jobs.

Even within tech, the gap is widening:

  • Reports cite OpenAI and Anthropic offering base pay $40,000–$85,000 above comparable roles at traditional Big Tech firms. [S1]
  • Senior AI engineers can earn around $325,000, compared with about $265,000 for similar roles at Apple. [S1]

Tradeoff: a smaller set of AI‑specialist roles gets a big income boost, while many adjacent roles face more competition and weaker negotiating power. [S1]

Spillover: housing and cost of living (where this becomes unavoidable)

That money is already reshaping housing. Rental brokers report AI employees with $35,000–$40,000 in monthly income taking apartments at above‑asking prices, pricing out other qualified applicants. [S1] I think this is one of the clearest ways “AI wealth” turns into “everyone else pays more.” [S1]

For the general public: how people can lose financially

In my opinion, the risk isn’t only “will I get laid off?” It’s also:

  • Wages flattening in roles where AI makes output easier to substitute
  • Fewer paths to ownership upside if the most valuable models, data, and distribution stay inside a small set of large organizations [S1]
  • Higher living costs in places where AI compensation concentrates and spills into housing [S1]

What failed (and what I’m trying instead)

In my experience, much of the public conversation failed by staying vague—“AI will help everyone”—while the visible outcomes are uneven: layoffs on one side, outsized pay on the other, and higher housing pressure in between. [S1]

Another assumption that failed: that “the code” is the moat. We’re already seeing AI support “clean room” cloning that sidesteps traditional copyright protections, which makes it harder for smaller teams to defend their work and capture value. [S1]

What’s the right thing to do—and how do you prepare?

I don’t think there’s a single answer, but here’s what I think is practical given what’s already happening:

  • Assume uneven outcomes and plan accordingly. In my opinion, “prepare” means expecting job pressure in some white‑collar work (including programming), while a smaller set of AI‑specialist roles gets rewarded. [S1]
  • Build defensible value if you’re building. In my experience, the most concrete path is: proprietary data, continuous data enrichment, and value beyond code (services, integrations, support), because cloning risk is real. [S1]
  • Be honest about tradeoffs. If AI raises productivity but concentrates ownership and pay, pretending it’s evenly shared makes trust worse, not better. [S1]

The path to AI democratization (what I’m watching for)

I think “democratization” has to mean more than access to a chatbot. If the financial upside stays concentrated, the social tension won’t go away. [S1] To me, democratization needs two things:

  1. Broader access plus real accountability (not just “anyone can use it,” but “the impacts are visible and governed”). [S1]
  2. AI that works with humans, not around them. In my opinion, that’s the mission: bring AI and humans together in ways that benefit the world—especially through better human‑AI interaction and emotion‑aware systems that prioritize connection and trust, not sterile perfection. [S3]

BTW, I’ve been focused on AIHumanity—work aimed at helping AI understand humans and improving human‑AI interaction. This topic keeps coming up for me: if AI changes who earns, who owns, and who gets priced out, we should talk about it in financial terms—not just tech terms. [S1]