The year ahead: 2026

The Space centers on a strategic shift away from politics toward technology, manufacturing, and rigorous quantitative analysis in 2026. The host (based in Oceania) describes building subscription-based independence via the Noetic Order and frames the year ahead as dominated by narrative engineering. Elon Musk briefly joins to outline 2026 goals: widespread robotaxi deployment, Optimus v3, and full Starship reusability, plus lunar manufacturing of solar cells and radiators and mass drivers to deploy AI power satellites. Elliot argues that hand-tuned kernel optimization will be overtaken by automated methods, sharing SRAM-heavy accelerator and NVMe-striping strategies for feeding large MoE models. Omar ALX highlights Opus 4.5’s impact on end-to-end app creation and monetization. The discussion covers edge vs. cloud latency, rising RAM prices, memory supply dynamics (Micron/SK hynix/Samsung), and the “AI foam” thesis. Security topics surface (GPU memory leaks such as LeftoverLocals; a high-severity MongoDB CVE), as do broader macro/geopolitical themes: midterm volatility, Fed leadership shifts, and China’s assertive posture alongside competitive manufacturing. The session closes with community-building and a move to Discord for deeper engagement.

Space discussion: year-in-review, 2026 outlook, AI/compute, space manufacturing, and creator economy

Cast of speakers (real names/handles where identifiable)

  • Adrian (host): Runs “Noetic Order,” lives in Oceania, analyst/commentator focused on narrative engineering, tech/manufacturing, and quantitative research. Avoids politics on X; monetizes via advanced member subscriptions; moves community to Discord post-space.
  • Elon Musk: Joined briefly; shared outlook on Tesla (robotaxi, Optimus v3), SpaceX (Starship, full reusability), and lunar/space-based manufacturing.
  • Elliot: Kernel/compute optimization engineer and educator (CUDA/GPU kernels); strong views on the near-term obsolescence of hand-optimization and on memory/throughput architectures for LLMs.
  • Omar (Omar ALX): Developer/content creator; building an autonomous car telemetry/database app; heavy user of cutting-edge LLMs (Opus 4.5) for full-stack acceleration and indie SaaS.
  • Other participants: Several technically savvy contributors discussed security (GPU memory leakage POCs, MongoDB CVE), databases, Apple/Intel/NVIDIA supply-chain dynamics, and cultural/AI meta-topics. Names not clearly stated in the recording.

Adrian’s 2025 reflection and strategic pivot

  • Personal “finance year”: Reassessed time as money; concluded that prior engagement patterns weren’t productive. Solution: build a community with aligned incentives via paid “advanced members” in Noetic Order to become independent of X revenue.
  • Doxxing and location: Now openly states he’s in Oceania; surprisingly found being doxxed “freeing” as it nullified incorrect narratives. Emphasizes his brand as quantitative analysis, dismantling legacy-media claims with public data.
  • Exit from politics: Stopped posting politics (since a February NZ trip). Rationale: politics ≈ “narrative engineering,” tribal and transactional like sports; institutions act in their self-interest; better to focus on constructive domains (technology, manufacturing, and even spiritual/intellectual depth rather than superficial iconography).
  • Community-building: Leans on Discord to convert parasocial attention into meaningful social interactions; runs daily sessions for subscribers; stresses etiquette and zero tolerance for spam.

2026 outlook: politics, macro, and geopolitics

  • Narrative-heavy cycle: Expects intense “narrative engineering” around U.S. midterms. Predicts a difficult cycle for Republicans (unpopular take), though contests will be close and historical interpretations divergent. Anticipates politicization seeping into unrelated domains (even big sports).
  • Visibility strategy: Plans to stay observational and analytical rather than engaging in political performance.
  • Trump and policy volatility: Characterizes Trump as highly transactional, leading to a volatile, “felt-but-barely-perceived” tech-geopolitical cold war. Predicts price-level whiplash that people will notice in everyday goods/services.
  • Federal Reserve: Notes Jerome Powell is set to step down as Chair (while remaining on the Board); expects the Fed to structurally protect rate-setting from “loyalist” interference. Near-term rates could be tweaked for political optics, with a later rebalancing.
  • China: Expects a more aggressive posture. Technologically, anticipates rapid catch-up in compute (estimates China sits roughly 2–2.5 years behind, at about RTX 4060-class compute today), added competition in memory, and sustained asymmetric strategies. Concerned that recent signaling around Taiwan shifts risk from “quiet asymmetric catch-up” to attention-raising brinkmanship, which may be strategically unnecessary and risky.

AI, compute, and memory markets: “foam,” not bubble

  • AI demand dynamics: Adrian rejects “AI bubble” framing and instead uses “foam”: many smaller, durable bubbles that persist and diffuse rather than a single pop. View: AI demand remains structurally strong; innovation in both the U.S. and China.
  • Memory crunch: Acute shortages in DRAM/NAND/NVMe; major vendors (e.g., Micron, SK hynix, Samsung) are prioritizing AI over consumer demand. Adrian and Elliot both track RAM/NVMe constraints and price spikes; Adrian speculates that repurposing memory lines for DRAM is exacerbating NVMe scarcity.
  • Investment posture: Adrian favored Micron as a logical play; considered silver but excluded due to lower conviction. Omar avoids individual hardware equities due to supply chain/disaster volatility.
  • Edge vs cloud: Omar envisions “dumb” client devices tethered to powerful AI backends; Adrian counters that latency means significant compute must remain at the edge for acceptable UX. Both expect immense growth in effective compute delivered to users.
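
Adrian’s latency point can be made concrete with a toy budget. The sketch below is illustrative only: the round-trip time, server compute time, and the 100 ms “feels instant” threshold are assumptions for the example, not figures from the discussion.

```python
# Illustrative latency-budget arithmetic for the edge-vs-cloud debate.
# All numbers here are rough assumptions, not figures from the Space.

def response_latency_ms(network_rtt_ms: float, compute_ms: float) -> float:
    """Total perceived latency for one request: network round trip plus compute."""
    return network_rtt_ms + compute_ms

# A commonly cited rough threshold for an interaction to feel "instant".
INSTANT_THRESHOLD_MS = 100.0

# Cloud: fast accelerators, but a real network round trip in front of them.
cloud = response_latency_ms(network_rtt_ms=60.0, compute_ms=50.0)  # 110.0 ms
# Edge: slower local compute, zero network round trip.
edge = response_latency_ms(network_rtt_ms=0.0, compute_ms=80.0)    # 80.0 ms

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
print("cloud feels instant:", cloud <= INSTANT_THRESHOLD_MS)  # False
print("edge feels instant:", edge <= INSTANT_THRESHOLD_MS)    # True
```

The point of the arithmetic: even a modest network round trip can push a cloud-backed interaction past a perceptual threshold that slower local compute stays under, which is why latency-sensitive UX keeps pulling significant compute to the edge even as “dumb terminal” backends proliferate.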

Elon Musk segment: Tesla, SpaceX, and lunar manufacturing

  • 2026 milestones Musk highlighted:
    • Tesla: Widespread robotaxi deployment; Optimus v3 launch.
    • SpaceX: Achieve full reusability with Starship; initial major payloads likely Starlink satellites; “space computer” constellation/ramp as a key accelerant.
  • Lunar industrialization concept:
    • Manufacture heavy items (solar cells, radiators) on the Moon, ship low-mass components from Earth.
    • Use a lunar “mass driver” (railgun-like launcher) to place vast tonnage into orbit annually.
    • Target: scale toward on the order of 100 terawatts of AI compute capacity per year from lunar-sourced infrastructure.
  • Shielding and orbital realities: Musk was comfortable with magnetic shielding notions and pointed to operating experience with ~9,000 satellites; emphasized “get serious tonnage to the Moon, then send even more tonnage from the Moon.”
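
The mass-driver idea lends itself to back-of-envelope sizing. The sketch below uses the Moon’s escape velocity (~2.38 km/s); the annual tonnage and launcher efficiency are assumed figures for illustration, not numbers Musk gave.

```python
# Back-of-envelope physics for a lunar mass driver. The tonnage and
# efficiency figures are illustrative assumptions, not from the Space.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
LUNAR_ESCAPE_VELOCITY = 2380.0  # m/s; no atmosphere, so a launcher can reach it directly

def launch_power_watts(kg_per_year: float, efficiency: float = 0.5) -> float:
    """Average electrical power needed to throw kg_per_year off the Moon."""
    kinetic_energy_per_kg = 0.5 * LUNAR_ESCAPE_VELOCITY ** 2  # ~2.8 MJ/kg
    return kg_per_year * kinetic_energy_per_kg / efficiency / SECONDS_PER_YEAR

# Assume 1 million tonnes/year (1e9 kg) at 50% launcher efficiency.
p = launch_power_watts(1e9)
print(f"{p / 1e6:.0f} MW average")  # on the order of 180 MW
```

Even a million tonnes per year works out to modest average power by grid standards, which is part of the concept’s appeal: with no atmosphere in the way, electromagnetic launch from the lunar surface is energetically cheap relative to chemical rockets.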

Developer productivity, LLMs, and GDP amplification

  • Opus 4.5 impact (Elliot and Omar):
    • Omar: Opus 4.5 reshapes the dev workflow—describe an idea, get a working implementation end-to-end; copy-paste errors back for auto-fix; near-term still requires informed oversight (security, correctness), but the bar to shipping is much lower. He’s already earning with a $99 app (hundreds of sign-ups). Predicts GDP changes when anyone can command “1,000 developer equivalents” cheaply.
    • Elliot: Models are already incredibly capable at codegen relative to wattage/compute; asks what the world looks like at 10–1,000× more coding compute. Foresees rapid expansion in scope and speed of software creation.
  • Cognitive hygiene: Several argued AI “amplifies what’s there” (skills or deficits). Smart users get multiplicative gains; naive users may ship insecure/brittle systems. Adrian and others caution against outsourcing all thinking to LLMs; recommend strategic cycles: use LLMs, step away, come back sharper.

Software optimization and the return of performance engineering

  • Cost pressure: Adrian expects rising compute costs to force serious optimization, reversing the trend toward “sloppy” software (e.g., many Unreal Engine titles). He cites an exceptionally optimized UE title hitting very high FPS on high-end hardware, in contrast with typically bloated pipelines.
  • Microsoft’s challenge: Forecasts pressure on platform vendors (e.g., Microsoft) to make software lighter and more efficient.

Technical deep-dives: memory hierarchies, MoE loading, kernels

  • Elliot’s memory/throughput strategy for inference:
    • For MoE models (e.g., ~1T total parameters with ~32B active per token), the active experts must be staged at high bandwidth.
    • Approach 1: Groq-like SRAM-heavy architectures (massive on-chip SRAM; expensive but ultra-fast tokens/sec/user).
    • Approach 2: Stripe multiple high-end NVMe SSDs (e.g., four drives at ~15 GB/s each) via an adapter to reach ~60 GB/s sequential—approximating DDR5 bandwidth for sequential read-heavy inference. Critical caveats: this only works for sequential patterns; random access defeats the point. Understanding inference access patterns down to the byte is essential.
    • Warning: Don’t “binge on SSDs” without truly understanding access characteristics and writing the supporting software correctly.
  • Manufacturing constraints: Adrian notes DRAM and NVMe often share upstream fab resources; repurposing lines for DRAM can also tighten NVMe supply and pricing.
  • Kernel programming and AI automation:
    • Elliot predicts much low-level hand-optimization will be automated (evolutionary search/compilers/AI). He laments time sunk into books/publishing; still values having the mental model to reason about GPU behavior.
    • Example: He had Opus implement CUDA forward/backward for a novel layer; achieved ~5× faster training in minutes; code looked “highly optimized” even beyond his quick comprehension.
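
Elliot’s bandwidth reasoning can be sanity-checked with simple arithmetic. The ~32B-active MoE figure and the four-drive ~60 GB/s stripe come from the discussion; the 1-byte-per-parameter weights (e.g., 8-bit quantization) and the DDR5-class comparison figure are my assumptions.

```python
# Rough worst-case throughput arithmetic for the serving strategies above.
# Parameter counts are from the discussion; byte-per-parameter weights and
# the bandwidth figures are illustrative assumptions.

ACTIVE_PARAMS = 32e9   # ~32B parameters active per token (MoE)
BYTES_PER_PARAM = 1    # assume 8-bit quantized weights

def tokens_per_second(bandwidth_bytes_per_s: float) -> float:
    """Upper bound if every active weight must be streamed for each token."""
    bytes_per_token = ACTIVE_PARAMS * BYTES_PER_PARAM
    return bandwidth_bytes_per_s / bytes_per_token

nvme_stripe = tokens_per_second(60e9)  # four ~15 GB/s NVMe drives striped
ddr5_class = tokens_per_second(90e9)   # dual-channel DDR5-class bandwidth
print(f"NVMe stripe: {nvme_stripe:.2f} tok/s")  # ~1.88
print(f"DDR5-class:  {ddr5_class:.2f} tok/s")   # ~2.81
```

The worst-case numbers show why sequential access patterns and expert caching matter so much: streaming every active weight per token caps even a 60 GB/s NVMe stripe at a couple of tokens per second per user, while SRAM-heavy designs with orders of magnitude more on-chip bandwidth escape that ceiling entirely.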

Security and infrastructure notes

  • GPU memory leakage POC: A participant referenced the “LeftoverLocals” GPU vulnerability and shared a repo demonstrating GPU memory-leakage vectors; relevant to multi-tenant GPU inference security.
  • MongoDB CVE: Mention of a recent CVSS 8.7 vulnerability with active exploitation; general caution on operational hardening. Additional commentary on MongoDB’s eventual consistency vs. transactional guarantees.

Supply chain, platforms, and chips

  • Apple, M-series, Intel foundry:
    • Debate on Apple’s long-standing engineering rigor (e.g., Apple historically exposed numerous Intel issues; M-series unified memory is class-leading). Discussion that government/market forces might push Apple to fab with Intel (18A) later; skepticism remains due to process maturity.
    • NVIDIA reportedly bought ~$5B of Intel stock; still, some participants said NVIDIA’s evaluation of Intel’s 18A process wasn’t compelling and that it returned to TSMC.
  • NVIDIA strategy/culture (jokes and takes):
    • Jensen Huang lore, CUDA education, and whether NVIDIA truly wants external CUDA coders versus bringing talent in-house. Observations that Blackwell introduces a different programming model, while Ampere/Hopper are “solved” from a documentation standpoint. Light-hearted ribbing about “ray-tracing keynotes” as a marketing superpower.

Cultural/meta threads: “narrative engineering,” psychosis, and memes

  • Narrative engineering: Central theme—institutions and companies pursue self-interest; optics and narratives are engineered and cyclical. 2026 will be a high-water mark for narrative manipulation.
  • AI amplifies traits: Group consensus that LLMs magnify existing competence or dysfunction; without discipline, you risk “LLM psychosis” (overreliance, context overflows, ungrounded ideation). Advocated periodic abstention and “progressive overload” for the brain.
  • “Mass driver” meme and Dyson swarm bitstreams: Running jokes about lunar mass drivers, renaming everything into triangles, and a speculative future where everything is delivered as AI bitstreams and edge devices are mere displays.

Highlights and takeaways

  • Space/compute thesis (Musk):
    • 2026 targets—Tesla robotaxis and Optimus v3; Starship full reusability. Longer-term: the Moon as a heavy-manufacturing base (solar cells/radiators), mass drivers enabling huge orbital payloads, and scaling toward on the order of 100 TW of AI compute capacity per year from lunar infrastructure.
  • Macro/geopolitics (Adrian):
    • Expect a volatile, narrative-heavy midterms year. Fed governance will aim to preserve rate independence. China is simultaneously accelerating in tech and becoming more assertive geopolitically; the “felt” cold war will mostly show up in prices and supply chains.
  • AI market structure: Not a single bubble but a durable foam of opportunities. Persistent memory constraints (DRAM/NVMe) and rising costs will force genuine software optimization and smarter memory hierarchies.
  • Dev productivity: Modern LLMs (e.g., Opus 4.5) already deliver end-to-end coding and debugging. Indie apps with real ARR are feasible for solo founders. Security and oversight remain crucial near term.
  • Edge vs cloud: Both will matter. Latency-sensitive experiences require edge compute, while “dumb terminals” to hyperscale AI will also proliferate.
  • Security: Keep a close eye on GPU isolation/memory leakage research and real-world database CVEs.
  • Community strategy: To convert viral spikes into durable engagement, move from parasocial feeds to structured communities (Discord), with clear norms and value for members.

Open questions to monitor in 2026

  • U.S. midterms: How do narratives, turnouts, and legal/institutional guardrails shape outcomes? How noisy are markets and prices during the cycle?
  • Fed leadership transition: How successfully does institutional design buffer policy from political interference?
  • China tech catch-up and Taiwan signaling: Does Beijing maintain asymmetric strategies or escalate? How do U.S. export controls/supply chains adapt?
  • Compute supply: Do DRAM/NAND constraints ease, or do space/AI expansions create new ceilings? Do SRAM-heavy accelerators (Groq-like) and MoE-optimized memory orchestration go mainstream?
  • Tesla/SpaceX milestones: Can Tesla deliver widespread robotaxi and Optimus v3 at scale? Does Starship achieve reliable full reusability, and does Starlink/“space computer” payload cadence inflect?
  • Lunar industry: Do any concrete steps toward lunar surface power/thermal manufacturing emerge (pilot projects, MOUs, payload manifests)?
  • Developer workflows: To what extent do LLMs subsume traditional compiler/optimizer roles? How do we standardize safe LLM use for secure, reliable software at scale?

Administrative

  • Adrian closed by moving the session to Noetic Order’s Discord for the daily subscriber program and reiterated community rules (no spam, firm moderation).