TIG and Agents

The Spaces focused on TIG’s upcoming agentic intelligence initiatives and a novel “mother of all datasets”: codified, practitioner-level know‑how for inventing better algorithms. John explained why this tacit knowledge is rarely written down, why algorithmic improvement is the fastest path to AI self‑enhancement (faster than hardware cycles), and how LLMs’ broad context plus market incentives can make such notes legible and valuable. He argued that crowdsourcing this long tail of non‑correlated strategies creates an open‑source advantage because big AI firms don’t possess this dataset. Updates included the Cambridge/Vitalik backstory, TIG’s halving schedule, alignment with Sam Altman’s statement that OpenAI’s edge is algorithmic innovation, and TIG’s prior state‑of‑the‑art result on the Quadratic Knapsack problem, evidence that crowdsourcing can accelerate algorithms. A forthcoming non‑convex optimization challenge is on testnet. Technical issues cut Q&A short; weekly lottery winners were announced.

TIG Community Call: The "Mother of All Datasets," Agentic Innovator Agents, Halving, and Ecosystem Updates

Participants and context

  • John (TIG founder/research lead; Cambridge PhD in Mathematics & Theoretical Physics; founder of the Cambridge Crypto Society). He is consistently referred to as “John.”
  • Host (TIG community lead; alias likely “Bacon”, with “Sparta” possibly also mentioned early in the call). The host manages community calls, comms, and giveaways.
  • Note: The session experienced recurring Twitter Spaces audio delays and disconnects; Q&A could not be conducted.

Anecdote: John, Vitalik, and Cambridge origins

  • A photo of John with Vitalik Buterin (circulating on Discord/Telegram/Twitter) is authentic. It was taken at Cambridge (King’s College Chapel visible in the background), likely around 2014.
  • John founded the Cambridge Crypto Society circa 2012–2013 (one of the earliest after MIT’s Bitcoin society). Vitalik gave the inaugural talk and visited multiple times pre-Ethereum launch (pre-2015).
  • John discussed the embryonic idea that became TIG with Vitalik back then; it took years to resolve the core mechanism that prevents naive implementations from collapsing into centralization.
  • Vitalik was described as down-to-earth, engaged, and “normal for Cambridge” (jokingly noting World of Warcraft interest as unsurprising in a math department).

Core topic: The "Mother of All Datasets" — codifying tacit know-how for algorithm invention

  • Premise: There exists a vast, critical dataset that big AI companies do not currently have—uncodified know-how for inventing new algorithms (strategies for progress, getting unstuck, navigating dead ends).
  • Why this matters:
    • Algorithm invention is the fastest route to recursive self-improvement. Hardware-led improvement loops are measured in years (design → fab → deployment), whereas algorithmic improvements can feed back in milliseconds.
    • The AI that can invent better algorithms for itself is the one most likely to achieve escape velocity toward AGI/superintelligence.
  • Why this dataset isn’t already written down:
    • The real process of invention is messy: wrong turns, dead ends, backtracking, “unsticking” tactics. Published papers are clean narratives that omit the process.
    • Writing the full process is costly, slows invention, and mixes implicit reasoning not fully externalized.
    • Crucially, these strategies are highly context-dependent (background, preferred techniques, prior knowledge). Even if written, they often won’t be meaningful to another human without identical context.
  • Why it can be useful now:
    • LLMs/AIs possess vast background knowledge and broad context windows. This increases the likelihood that idiosyncratic, context-heavy write-ups become interpretable and actionable to AI—much more so than to another human.
  • Market and mechanism design:
    • Codifying tacit know-how is high effort, so it needs strong incentives. A market mechanism should allocate more rewards to contributors whose know-how proves useful for algorithm development (see the illustrative sketch after this list).
    • The valuable supply is long-tail and diverse: when stuck, what helps most is a non-consensus, uncorrelated tactic you haven’t yet tried. This means many practitioners (not just top-tier lab employees) have unique, useful strategies.
    • Centralized labs cannot scale to hire the world; only decentralized crowdsourcing can capture the breadth and long tail of strategies.
  • Agentic Innovator Agents:
    • TIG is preparing to launch “agentic intelligence” initiatives (innovator agents) that will:
      • Aggregate and structure these tacit strategies.
      • Price and incentivize contributions.
      • Apply them to algorithm invention workflows.
    • Documentation/announcement slated for “tomorrow”; a two-part essay series by John frames the concept (Part 1 is out; Part 2 imminent).
  • Provenance and inspiration:
    • John cites a related idea from a Fields Medalist at Cambridge, heard on the call as “Kim Gauss” (likely Sir Timothy Gowers). In a recent video, Gowers argued that AIs lack the “unsticking” strategies mathematicians use on proofs, because those strategies are not in the literature.
    • John generalizes the insight from proofs to algorithm design and AGI, linking explicitly to recursive self-improvement—an angle not widely addressed.
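
To make the incentive idea concrete, here is a minimal, purely hypothetical sketch of pro‑rata reward allocation. TIG’s actual mechanism was not specified on the call; the function name, the “usefulness” scoring input, and all figures below are assumptions for illustration only.

```python
# Purely illustrative: one way a market could route more rewards to
# contributors whose know-how proves useful, as described on the call.
# TIG's actual mechanism, the function name, and the scoring inputs
# are assumptions for the sake of the example.
def allocate_rewards(pool: float, usefulness: dict[str, float]) -> dict[str, float]:
    """Split a reward pool pro rata by each contributor's usefulness score."""
    total = sum(usefulness.values())
    if total == 0:
        # No demonstrated usefulness this round: nobody gets paid.
        return {name: 0.0 for name in usefulness}
    return {name: pool * score / total for name, score in usefulness.items()}

# A usefulness score might, e.g., count how often a contributed strategy
# helped an innovator agent get unstuck (hypothetical metric).
print(allocate_rewards(1000.0, {"alice": 3.0, "bob": 1.0, "carol": 0.0}))
# -> {'alice': 750.0, 'bob': 250.0, 'carol': 0.0}
```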

Host perspective and examples

  • Tacit know-how in practice: The host described watching his musician girlfriend compose in a DAW (digital audio workstation). She “just knows” what to do but cannot effectively explain her method; repeating instructions doesn’t transfer proficiency. This mirrors how inventive processes resist straightforward codification.
  • Call to comprehension and amplification:
    • Read both essays. If the concepts feel abstract, use GPT‑5 (noted as newly released) to explain the essays in simple terms, then help explain and promote them to broader audiences. A Family Guy-style explainer video is planned.

Strategic implications: Open source can win

  • John’s thesis: Because Big Tech lacks this dataset, a decentralized, in-situ knowledge base creates a unique strategic advantage for open source—if built before centralized scraping captures it.
  • Many assumed Big Tech already had all the relevant data; John argues this unrecognized gap means the race is not “already over.” It is an unexpected opening that coordinated action can capitalize on.

Alignment with industry signals: Sam Altman on the primacy of algorithms

  • Sam Altman recently stated OpenAI’s key competitive advantage is how quickly and consistently they develop better algorithms.
  • John’s analysis:
    • Transformative AI leaps (e.g., attention/transformers) were algorithmic, not primarily hardware or data.
    • Hardware cycles are slower; going forward, performance gains will overwhelmingly come from algorithms.
    • Openness matters: OpenAI often keeps outputs closed; DeepMind publishes papers but not full code. TIG argues for open algorithms and implementations.
  • TIG’s position vs. labs:
    • TIG does not compete with OpenAI; instead it enhances any lab’s ability to improve algorithms by tapping the world’s cumulative expertise via market incentives.
    • At sufficient token price levels, it could even become more profitable for labs to share otherwise private algorithms on TIG.
  • Evidence TIG’s model works:
    • TIG’s community achieved a new state of the art on the Quadratic Knapsack problem in roughly a year. It is a classical, widely applicable optimization problem (e.g., in computational biology), so the result validates crowdsourcing plus aligned incentives; a minimal formulation is sketched below.
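
For context on the benchmark itself, a minimal sketch of the quadratic knapsack problem (QKP): pick a subset of items maximizing linear profits plus pairwise bonuses, subject to a weight cap. All instance data below is made up for illustration and unrelated to TIG’s actual challenge.

```python
# Minimal QKP sketch; the instance data is illustrative only.
from itertools import combinations

profits = [5, 8, 3, 11]     # linear profit p_i for packing item i
weights = [3, 5, 2, 6]      # weight w_i of item i
capacity = 10               # knapsack capacity C
# q_ij: extra profit earned only when items i and j are packed together
pair_profit = {(0, 1): 4, (1, 3): 6, (2, 3): 2}

def value(items: tuple[int, ...]) -> int:
    """Objective: sum of p_i over chosen items plus q_ij over chosen pairs."""
    linear = sum(profits[i] for i in items)
    quadratic = sum(pair_profit.get(pair, 0) for pair in combinations(items, 2))
    return linear + quadratic

# Brute force over all 2^n subsets: fine for 4 items, hopeless at scale.
# QKP is NP-hard, which is why better heuristics are worth crowdsourcing.
feasible = (
    subset
    for r in range(len(profits) + 1)
    for subset in combinations(range(len(profits)), r)
    if sum(weights[i] for i in subset) <= capacity
)
best = max(feasible, key=value)
print(best, value(best))  # -> (0, 1, 2) 20
```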

Tokenomics update: Halving

  • Definition: Emission rate halves, reducing token inflation/new supply.
  • TIG’s schedule: Tranche durations double over time. Early phases included a 6‑month tranche; the most recent halving marks the end of a 1‑year tranche; the next halving comes in 2 years, then 4 years, and so on (see the worked sketch after this list).
  • Implications: If demand is stable or rising while supply issuance halves, price tends to be positively impacted (cf. Bitcoin halving case studies). John deliberately avoided explicit price statements for legal prudence.
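
A worked sketch of the schedule as described, using a made‑up initial emission rate (TIG’s actual figures were not given on the call). One consequence of halving the rate while doubling the tranche length is that every tranche emits the same total:

```python
# Hedged sketch of the described schedule: the emission rate halves while
# the tranche duration doubles (6 months -> 1 year -> 2 years -> ...).
# The starting rate is a placeholder, not TIG's real emission figure.
rate = 100.0        # hypothetical tokens emitted per day in the first tranche
days = 183          # first tranche lasts roughly six months

total = 0.0
for tranche in range(5):
    emitted = rate * days
    total += emitted
    print(f"tranche {tranche}: {days:>5} days @ {rate:7.3f}/day -> {emitted:,.0f} tokens")
    rate /= 2       # emission rate halves at each boundary...
    days *= 2       # ...while the tranche length doubles

# Because (rate / 2) * (2 * days) == rate * days, each tranche emits the
# same total: supply growth slows per unit time, but not per tranche.
print(f"cumulative after 5 tranches: {total:,.0f} tokens")
```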

Upcoming challenge: Non‑convex optimization (teaser)

  • Non‑convex optimization is central to modern AI training and inference. An upcoming TIG challenge has been in long development and is now on testnet (a toy illustration of non‑convexity follows below).
  • Discussion was cut short due to John’s audio drop; details to come in a future call.
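
Since challenge details were not shared, here is only a generic toy showing why non‑convexity is hard: plain gradient descent on the one‑dimensional Rastrigin function (a standard non‑convex test function, not TIG’s challenge) reaches the global minimum or gets trapped in a local one depending entirely on where it starts.

```python
# Generic illustration, unrelated to TIG's actual challenge: fixed-step
# gradient descent on the 1-D Rastrigin function, a standard non-convex
# benchmark with many local minima around the global minimum at x = 0.
import math

def rastrigin(x: float) -> float:
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def grad(x: float) -> float:
    return 2 * x + 20 * math.pi * math.sin(2 * math.pi * x)

for x0 in (0.3, 2.8):                 # two different starting points
    x = x0
    for _ in range(2000):
        x -= 0.001 * grad(x)          # plain gradient descent
    print(f"start {x0}: x* = {x:.3f}, f(x*) = {rastrigin(x):.3f}")
# start 0.3 reaches the global minimum f(0) = 0; start 2.8 stalls in a
# local minimum near x = 3, which is exactly the non-convexity problem.
```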

Operations, community, and housekeeping

  • Technical issues: Significant latency/disconnects; Q&A postponed to next week.
  • Lottery winners: “butterflee” and “defi satay Baas” (handles as heard). Winners should DM the host.
  • Next steps and calls to action:
    • Watch for tomorrow’s announcement/documentation on agentic innovator agents.
    • Read John’s two-part “Mother of All Datasets” essays (Part 1 published; Part 2 imminent). Use LLMs to aid comprehension and help explain broadly.
    • Prepare to contribute your own tacit algorithm-invention know‑how as TIG rolls out capture and incentive mechanisms.

Key takeaways

  • The decisive missing ingredient for AGI may be a decentralized corpus of tacit strategies for algorithm invention—still largely in experts’ heads.
  • TIG’s agentic, market-aligned crowdsourcing can capture, price, and apply this “mother of all datasets,” creating a real competitive edge for open source before centralized labs can.
  • Algorithmic progress—not hardware—has driven and will continue to drive the largest performance gains; TIG has already demonstrated practical effectiveness with Quadratic Knapsack.
  • The halving reduces token emissions per schedule; macro effects depend on demand dynamics, with Bitcoin offering historical reference points.