Applied Gen-AI – From Hackathon to Enterprise Scale

Comparative playbooks from Citi, Microsoft and Seagate, covering ROI, guard-rails and scale.
Wednesday, October 15, 2025
Track: Impact
Three enterprise leaders will compare how they moved Gen-AI from promising pilot to dependable utility. Expect a frank discussion on data foundations, governance and measurement – from Citi's CitAI prompt libraries and privacy ring-fences, to Microsoft's telemetry-led adoption model, to Seagate's Copilot roll-out and data-mesh upgrade. The moderator will push for numbers, trade-offs and pitfalls so you leave with concrete options to scale safely. We will surface what to build versus buy, how to avoid pilot elitism, the thresholds that justify licence expansion, and where bias testing and red-teaming fit into release cycles. You will hear what changed behaviours, what stalled, and how each team isolated the AI signal from noisy productivity baselines.
This session will explore
  • Data and stack foundations: secure vector stores or feature layers, data-mesh upgrades, and role-based redaction for regulated functions.
  • Adoption engines: prompt-library governance, hackathons, cohort waves and telemetry loops to turn curiosity into daily depth.
  • ROI models that Finance respects: cost, productivity and risk capital – numerator and denominator choices, and isolating AI signal from noise.
  • Governance that scales: Responsible-AI councils, bias and privacy gates, red-teaming, plus Works-Council and regulator engagement.
  • Operating-model contrasts: Citi's in-house CitAI, Microsoft's product-owner leverage, Seagate's enterprise Copilot plus custom mesh.
  • Licensing strategy and cost control: avoiding pilot elitism and setting thresholds for enterprise-wide licence expansion.
  • When to retire dashboards: criteria for sunsetting legacy reports once AI summaries meet trust thresholds.
Learning objectives
  • Map your organisation to one of three Gen-AI operating models and understand the trade-offs.
  • List the minimum data and platform components required to move beyond pilots safely.
  • Apply two adoption levers (e.g., hackathon plus cohort telemetry) to accelerate real usage within 90 days.
  • Build an ROI frame that satisfies CFOs – spanning cost avoidance, productivity and risk reduction.
  • Assemble a governance checklist covering bias tests, privacy filters, red-team rhythms and approval pathways.
  • Set clear thresholds to deprecate legacy dashboards in favour of trusted AI summaries.

Why this is on the agenda

Boards and CFOs are asking for provable returns from Gen-AI investments while regulators press for guard-rails on privacy, bias and model risk. Large enterprises must integrate fragmented data, control licensing costs and shift behaviours at scale. Banking, technology and manufacturing face common pressures but different constraints – making cross-industry operating models valuable to compare. Meanwhile, budgets are tightening even as AI tools proliferate across knowledge work.