Day 2

Wednesday, June 10, 2026

Please note that all times listed are EDT (Eastern Daylight Time, GMT-4).


7:45 am

NETWORKING BREAKFAST: BUILD COMMUNITY CONTACTS

  • Start your day off right and connect with big data and analytics leaders.
  • Get to know your data peers and colleagues over a delicious breakfast.
  • Source practical tips, discuss best practices, and prepare for the day ahead.

8:45 am

OPENING COMMENTS FROM YOUR HOST

Gain insight into today’s sessions so you can get the most out of your conference experience.

 

9:00 am

OPENING KEYNOTE

The State of Data and AI in Canada: Current Constraints, Maturity Gaps, and the Road Ahead

Canadian organizations are accelerating AI adoption, yet most still struggle with foundational issues—fragmented data estates, unclear ownership models, talent shortages, rising regulatory pressure, and widening gaps between ambition and operational reality. Many have successful pilots but lack the governance, infrastructure, and cross-functional alignment needed to scale AI safely and economically. Align your organizational strategy with national realities to:

  • Diagnose the top maturity gaps in Canadian organizations across data quality, lineage, governance, talent, and model lifecycle management.
  • Prepare for the next five years of change—including emerging regulatory expectations, new assurance models, and the shift toward real-time, production-grade AI.
  • Walk away with a clear understanding of Canada’s current AI readiness—and a roadmap for closing the most urgent gaps before they become competitive liabilities.

Drive strategy effectiveness through a clear mapping of the talent, regulatory, and competitive landscape.

9:30 am

INDUSTRY EXPERT

Bridging the Knowledge Gap: Enhancing AI Understanding Across Your Organization

There remains a significant gap in understanding AI’s impact on business processes among general employees, posing a barrier to the successful implementation of AI projects. Foster a culture of AI literacy within your organization to empower employees and strengthen the effectiveness of AI initiatives. Source practical tips to:

  • Assess current employee understanding of AI and identify training needs.
  • Develop effective training programs to enhance AI literacy across all levels of the organization.
  • Promote a collaborative environment where employees feel comfortable engaging with AI technologies and contributing to project success.

Improve AI literacy to unlock its full potential within your organization.

10:00 am

ROUNDTABLES: DISCOVER THOUGHT-PROVOKING IDEAS

Take a deep dive down the innovation rabbit hole in one of our roundtable discussions. Share common challenges and best practices with your big data and analytics peers on a topic of your choosing:

  1. Prototypes to Production Pipelines
  2. Managing Data Quality at Scale in an AI-Driven World
  3. Balancing Privacy, Consent, and Innovation (AIDA, PIPEDA, Provincial Laws)
  4. Cross-Functional Data Product Teams: What’s Working and What’s Not
  5. Building a Culture That Supports Both Rigour and Experimentation
  6. Selecting and Rationalizing Tools in a Multi-Vendor, AI-Evolving Ecosystem

11:00 am

EXHIBITOR LOUNGE: VISIT BOOTHS & SOURCE EXPERTISE

  • Explore the latest big data and analytics technology and strategies with our industry-leading sponsors.
  • Share your challenges with the biggest innovators in the business.
  • Schedule one-to-one private meetings for personalized advice.

11:30 am

PANEL

TRACK 1: HIGHLY REGULATED, HIGH DATA SENSITIVITY

Regulation vs. Innovation: Building AI and Advanced Analytics Under Tight Controls

Highly regulated sectors must advance the use of AI and analytics while operating within strict frameworks like CPPA, AIDA, OSFI B-13, health-privacy laws, and stringent security standards. The challenge is driving innovation without increasing regulatory risk or slowing delivery timelines. Walk away with an action plan on:

  • Translating emerging regulations into practical architectural controls, including differential privacy, RBAC/ABAC, encryption in use, data lineage, and continuous policy enforcement.
  • Deploying RAG, multimodal models, and agentic workflows securely using private endpoints, VPC-only LLM access, and governed prompt flows.
  • Building governance and approval pathways that align compliance, security, and engineering without creating bottlenecks.
  • Implementing monitoring for bias, drift, hallucination risk, and explainability to meet audit and reporting requirements.
  • Establishing safe experimentation zones and guardrails that balance innovation with accountability.

Bolster AI delivery in tightly controlled environments to maximize innovation while maintaining full regulatory compliance.

11:30 am

PANEL

TRACK 2: PUBLIC TRUST & MISSION DRIVEN

Interoperability, Transparency, and Trust: Overcoming Data Fragmentation in Public Service Delivery

Public-sector and mission-driven organizations face a unique challenge: delivering seamless, equitable, and trustworthy services across systems that are deeply fragmented, historically siloed, and governed by strict privacy mandates. As expectations for transparency and real-time responsiveness rise, agencies must find ways to share data responsibly, coordinate missions, and build public confidence — without compromising security or citizen rights. Develop a blueprint to:

  • Drive interoperability in complex ecosystems: designing data-sharing frameworks across agencies, jurisdictions, and legacy systems without violating mandates such as CPPA, PHIPA/FOIPPA, or sector-specific statutes.
  • Foster transparent, accountable data use: embedding auditability, lineage, consent, and explainability to demonstrate responsible handling of citizen and community data.
  • Mitigate fragmentation through modern architecture: leveraging APIs, data fabric layers, canonical data models, and real-time event-driven integration to enable coordinated service delivery.

 

Advance citizen trust by making explainable, auditable, and responsible AI-assisted decisions a core operating principle.

11:30 am

PANEL

TRACK 3: REGULATED & COMMERCIALLY DRIVEN

Moments That Matter: Using Real-Time Intelligence to Personalize Every Customer Interaction

Competitive advantage now hinges on what happens the moment a customer clicks, calls, buys, switches channels, or signals intent. To deliver truly responsive and personalized experiences, organizations must move beyond batch reporting and static segmentation toward real-time, AI-ready decisioning. Integrate real-time analytics, unified customer data, and emerging AI capabilities to elevate every touchpoint — while maintaining trust and privacy. Take away specific solutions to:

  • Build a real-time data backbone using streaming pipelines, event-driven architectures, and low-latency feature stores.
  • Create holistic, consent-aware customer profiles that align with CPPA, CASL, and sector-specific privacy mandates.
  • Apply early AI-driven personalization techniques like contextual recommendations, intent prediction, and behavioural scoring.
  • Prepare for agentic personalization, where AI systems autonomously tailor interactions across digital and operational channels.

Expand revenue channels while maintaining transparency, fairness, and explainability to protect customer trust.

 

11:30 am

PANEL

TRACK 4: INDUSTRIAL & OPERATIONAL

Operationalizing IoT and OT Data at Scale: Solving Integrity, Latency, and Integration Challenges Across Industrial Environments

As manufacturers race to modernize their plants, the surge of IoT sensors, industrial controls, and OT systems has created massive volumes of data, much of it noisy, siloed, or too latency-sensitive for traditional data pipelines. Build reliable, real-time data foundations: from edge-to-cloud architecture decisions and protocol translation (OPC-UA/MQTT) to digital twin enablement and AI-driven predictive maintenance, operationalize OT and IoT data without disrupting production. Master the success factors to:

  • Ensure data integrity at the edge: filtering noise, handling sensor drift, and validating time-series accuracy (see the sketch below).
  • Optimize architectures for low-latency industrial analytics: determining when to compute at the edge, cloud, or on-premises.
  • Converge IT and OT data safely: overcoming incompatible protocols, historian lock-in, and network segmentation.
  • Enable digital twins and predictive maintenance with cleaner, more consistent data.

Optimize industrial operations by integrating IoT and OT data to enhance decision-making and reduce downtime.
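
To make the "data integrity at the edge" point concrete, here is a minimal, purely illustrative Python sketch of edge-side cleaning: a rolling-median filter suppresses noise spikes, and a simple drift check compares the smoothed value against a calibration baseline. The sensor values, window size, and drift tolerance are hypothetical placeholders, not a reference implementation from any speaker.

```python
from collections import deque
from statistics import median

class EdgeSensorFilter:
    """Illustrative edge-side cleaning for a single sensor's time series:
    a rolling-median filter suppresses noise spikes, and a simple drift
    check compares the smoothed value against a calibration baseline."""

    def __init__(self, baseline: float, window: int = 5, drift_tolerance: float = 0.05):
        self.baseline = baseline                # expected reading from the last calibration
        self.recent = deque(maxlen=window)      # rolling window of raw readings
        self.drift_tolerance = drift_tolerance  # fractional deviation treated as drift

    def ingest(self, raw_value: float) -> dict:
        self.recent.append(raw_value)
        smoothed = median(self.recent)          # spike-resistant estimate of the true value
        drift = abs(smoothed - self.baseline) / abs(self.baseline)
        return {"raw": raw_value, "smoothed": smoothed, "drift_suspected": drift > self.drift_tolerance}

# Hypothetical usage: a temperature sensor calibrated at 72.0 units. The 95.0
# spike is smoothed away, while the gradual upward creep is flagged as drift.
sensor = EdgeSensorFilter(baseline=72.0)
for reading in [72.1, 71.9, 95.0, 72.2, 74.5, 76.0, 77.5, 79.0, 80.5]:
    print(sensor.ingest(reading))
```

In practice this kind of logic would typically run on an edge gateway before readings are forwarded to historians or cloud pipelines.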

12:00 pm

CASE STUDY

TRACK 1: HIGHLY REGULATED, HIGH DATA SENSITIVITY

Breaking the Silos: How One Regulated Enterprise Unified Sensitive Data for Real-Time AI Without Violating Compliance

Many regulated organizations struggle to operationalize AI because their most valuable data — claims, patient records, transactions, investigations, KYC files, and risk assessments — remains locked in siloed, high-security systems with strict access controls and residency constraints. Deliver real-time AI and analytics by building a secure, compliant controlled-access data layer — without moving or exposing sensitive data. Achieve a step-by-step action plan to:

  • Implement a zero-copy, privacy-preserving architecture using technologies like secure data virtualization, in-place compute, and tokenized joins.
  • Enable real-time model inference on protected datasets through enclave-based processing, private endpoints, and fully auditable access paths.
  • Reconcile conflicting regulatory requirements across CPPA, AIDA, HIPAA/PHIPA, EMR constraints, OSFI B-13, and internal security policies.
  • Automate policy enforcement and dynamic data masking to ensure analysts, data scientists, and AI systems only see what they are allowed to access (see the masking sketch below).
  • Reduce data delivery times from months to minutes while maintaining full lineage, risk classification, residency guarantees, and auditability.

 

Transform the patient experience with scalable AI to deliver insights faster while maintaining compliance and security.
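
As a companion to the "dynamic data masking" item above, the following toy Python sketch shows the basic idea: the same record is filtered differently depending on the requester's role before it reaches an analyst, data scientist, or model. The roles, fields, and policy are hypothetical stand-ins for what a real policy engine or data platform would enforce.

```python
# A toy illustration (not any vendor's API) of policy-driven dynamic masking:
# the same record is filtered differently depending on the requester's role.

MASKING_POLICY = {
    # role -> fields the role may see in the clear; everything else is masked
    "data_scientist": {"age_band", "province", "claim_amount"},
    "fraud_analyst": {"age_band", "province", "claim_amount", "account_id"},
    "auditor": {"age_band", "province", "claim_amount", "account_id", "sin"},
}

def mask_record(record: dict, role: str) -> dict:
    allowed = MASKING_POLICY.get(role, set())
    return {
        field: value if field in allowed else "***MASKED***"
        for field, value in record.items()
    }

claim = {
    "account_id": "AC-104-221",
    "sin": "123-456-789",
    "age_band": "35-44",
    "province": "ON",
    "claim_amount": 1840.50,
}

print(mask_record(claim, "data_scientist"))   # identifiers masked
print(mask_record(claim, "auditor"))          # full visibility; access still logged upstream
```

The design point is that the policy lives in one place and is applied at query time, so analysts and models never receive fields they are not entitled to see.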

12:00 pm

CASE STUDY

TRACK 2: PUBLIC TRUST & MISSION DRIVEN

One Citizen View: How a Public Agency Unified Identity, Consent, and Case Data to Transform Service Delivery

Public sector organizations are under growing pressure to deliver seamless, personalized, and equitable services — yet citizen data is often scattered across disconnected case systems, departmental silos, legacy registries, and incompatible standards. Without a unified view of individuals and households, agencies struggle with inefficiency, duplication, inconsistent decisions, and declining public trust. Build a privacy-preserving, cross-agency citizen data layer — enabling better decisions, faster service delivery, and dramatically improved transparency. Adopt best practices to:

  • Create a unified identity and case-data graph, resolving discrepancies across decades-old systems while respecting PHIPA/FOIPPA, CPPA, and sector-specific statutes.
  • Implement consent-aware data sharing, where citizen permissions dynamically shape what information can be accessed, by whom, and for which purpose.
  • Establish a secure, real-time interoperability layer using APIs, event streams, and standardized schemas to connect health, social, housing, justice, and benefits systems.
  • Use explainable analytics to support consistent, auditable decisions in eligibility, triage, and benefits processing.

 

Enrich public service delivery by integrating identity, consent, and case data to enhance responsiveness and fairness.

12:00 pm

CASE STUDY

TRACK 3: REGULATED & COMMERCIALLY DRIVEN

From Batch to Instant: How One Enterprise Built a Real-Time Customer Intelligence Layer That Transformed CX and Reduced Churn

Many customer-centric organizations still rely on batch data, overnight refreshes, and siloed systems — making it impossible to respond in the moments customers actually need help, show intent, or signal frustration. This case study highlights how one large Telco made the leap to real-time customer intelligence, driving measurable gains in churn reduction, NPS, and revenue. Create a roadmap to:

  • Build a single, low-latency customer state store, integrating streaming events, CRM data, behavioural signals, and operational data with sub-second freshness (see the sketch below).
  • Replace batch decisioning with a real-time next-best-action engine, enabling proactive interventions during key high-friction moments (billing, outages, cart abandonment, and call-centre intent detection).
  • Implement consent-aware identity resolution, ensuring personalized actions remain compliant with CPPA, CASL, and industry data-handling rules.
  • Leverage real-time feature engineering to power AI models for churn prediction, dynamic offers, anomaly detection, and personalized service recovery.

 

Heighten business outcomes by moving from batch to real-time decisioning to increase revenue and strengthen trust.
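
The sketch below illustrates, in deliberately simplified Python, the pattern behind the "low-latency customer state store" and "real-time next-best-action engine" items: per-customer state is updated as events stream in, and a decision rule reads that state at interaction time. Event names, thresholds, and actions are hypothetical; a production system would use a streaming platform and feature store rather than an in-memory dictionary.

```python
from collections import defaultdict
import time

# In-memory stand-in for a low-latency customer state store.
customer_state = defaultdict(lambda: {"failed_payments": 0, "outage_minutes": 0,
                                      "last_event_ts": None})

def apply_event(event: dict) -> None:
    """Update per-customer state as each event arrives (no overnight batch)."""
    state = customer_state[event["customer_id"]]
    if event["type"] == "payment_failed":
        state["failed_payments"] += 1
    elif event["type"] == "service_outage":
        state["outage_minutes"] += event["duration_min"]
    state["last_event_ts"] = event["ts"]

def next_best_action(customer_id: str) -> str:
    """Toy rule-based next-best-action lookup read at interaction time."""
    state = customer_state[customer_id]
    if state["failed_payments"] >= 2:
        return "offer_flexible_billing"
    if state["outage_minutes"] > 30:
        return "proactive_service_credit"
    return "no_action"

for evt in [
    {"customer_id": "C42", "type": "payment_failed", "ts": time.time()},
    {"customer_id": "C42", "type": "payment_failed", "ts": time.time()},
    {"customer_id": "C7", "type": "service_outage", "duration_min": 45, "ts": time.time()},
]:
    apply_event(evt)

print(next_best_action("C42"))  # offer_flexible_billing
print(next_best_action("C7"))   # proactive_service_credit
```

In a real deployment the rules would usually be replaced or augmented by ML models, and every action would pass through consent and compliance checks first.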

12:00 pm

CASE STUDY

TRACK 4: INDUSTRIAL & OPERATIONAL

How YVR Built a Federally Regulated, Revenue-Generating Digital Twin Platform

Since the 2010 Winter Olympics, Vancouver International Airport (YVR) has championed in-house innovation with a dedicated software and analytics team focused on operational excellence. YVR’s homegrown digital twin platform — an integrated operational system that consolidates real-time data from airport infrastructure, air carriers, ports, and government departments — improves situational awareness, streamlines decisions, and optimizes costs per passenger. Walk away with an action plan on:

  • Developing a modular digital twin in-house, sidestepping generic vendor solutions.
  • Breaking down data silos between private companies and federal agencies through robust governance, as demonstrated by YVR’s successful collaboration with Air Canada.
  • Leveraging cross-sector data sharing to model disruptions and scenarios across airside, terminal, and landside environments.
  • Creating a scalable platform that delivered immediate ROI in 2023 and that YVR now licenses to other ports.
  • Applying analytics to optimize its diverse revenue streams beyond aeronautical charges.

 

Advance cross-sector collaboration by building digital twin capabilities to enhance decision-making and efficiency.

12:30 pm

NETWORKING LUNCH:
DELVE INTO INDUSTRY CONVERSATIONS

  • Meet interesting speakers and pick their brains on the latest data issues.
  • Expand your network and make connections that last beyond the conference.
  • Enjoy great food and service while engaging with your big data and analytics colleagues.

1:30 pm

EXHIBITOR LOUNGE: VISIT BOOTHS & WIN PRIZES

  • Browse through different sponsor booths and test drive innovative technology.
  • Enter your name for a chance to win exciting prizes.
  • Take advantage of event-specific offers and exclusive content.

 

1:45 pm

CASE STUDY

TRACK 1: ASPIRING SENIOR LEADERS

Building Credibility with Executive Teams by Balancing Short- and Long-Term Data Value Creation

Establishing trust and credibility with executive leadership is crucial for data teams seeking to drive strategic initiatives. Data leaders must decide between delivering immediate, tangible value through iterative projects and investing in foundational transformations that yield long-term benefits. Find the right balance for your organization, educate executives, and manage expectations with the board. Source practical tips to:

  • Develop a roadmap that prioritizes both short-term wins and long-term data foundation enhancements.
  • Communicate the ROI of data projects effectively to executive teams, helping them appreciate the art and science of data.
  • Foster a collaborative environment where executives feel empowered to both challenge and support data leaders.

 

Improve your culture of data-driven decision-making to align with, and be led by, executive leadership.

1:45 pm

CASE STUDY

TRACK 2: TECHNICAL MASTERS

Driving Innovation Through Strategic Vendor Partnerships in Data and Analytics

In today’s rapidly evolving data landscape, managing vendor relationships effectively is vital for fostering innovation and adopting cutting-edge technologies. Navigating an intricate ecosystem of vendors that provide platforms and capabilities at scale while meeting unique organizational needs requires specialized skills in influence, co-design, and partnership. Achieve a step-by-step action plan to:

  • Influence vendors to align their product development pathways with your organizational needs.
  • Facilitate collaboration between vendors to drive interoperability and critical functionality.
  • Harness external expertise to introduce innovative solutions that enhance data capabilities and analytics outcomes.

 

Transform your organization’s data and analytics innovation journey through strategic vendor partnerships to maximize impact and adoption.

2:15 pm

WORKSHOP

TRACK 1: ASPIRING SENIOR LEADERS

Mastering Objection Handling: Preparing and Delivering Effective Responses When Requesting Resources, Pitching Proposals, and Securing Buy-In for Your Business Case

Mastered by salespeople but in huge demand from data and analytics leaders, objection handling is as much a science as it is an art. Take back to your office strategies to:

  • Build confidence in your ability to address objections.
  • Understand the roots of objections related to models, scalability, or the impact of various data analytics activities.
  • Deliver confident, thoughtful, and well-structured responses to your executives.

 

Enrich your objection-handling skills to help secure approval for your most important proposals.

2:15 pm

WORKSHOP

TRACK 2: TECHNICAL MASTERS

Bridging the Gap: Balancing Technical Expertise and Business Acumen in Data-Driven Strategies

Building successful data and analytics initiatives requires more than technical skills alone. Balancing deep technical expertise with strong business knowledge drives more relevant and impactful analytics, supports data-driven decision-making, and strengthens alignment with organizational priorities. Adopt best practices to:

  • Structure cross-functional teams that integrate data science and business expertise.
  • Cultivate a shared language and approach for data projects that align with core business goals.

 

Bolster your strategic impact and organizational buy-in to drive sustainable growth and long-term alignment.

3:00 pm

EXHIBITOR LOUNGE:
ATTEND VENDOR DEMOS & CONSULT INDUSTRY EXPERTS

  • Enjoy exclusive sponsor demos and experience the next level of big data and analytics innovation firsthand.
  • Meet one-on-one with leading solution providers to discuss organizational hurdles.
  • Brainstorm solutions and gain new perspectives and ideas.

 

3:30 pm

PANEL

Beyond Databases: The Future Is Vector-Native — How Retrieval-First Architectures Will Transform Enterprise AI

Vector- and retrieval-native architectures are rapidly emerging as the backbone of modern enterprise AI. By combining semantic indexes, embedding pipelines, and retrieval-augmented generation (RAG), organizations can dramatically improve model accuracy, latency, and reliability. This enables new forms of real-time intelligence that traditional relational or columnar databases can’t handle. Master the success factors to:

  • Build embedding pipelines and vector stores at scale for enterprise workloads.
  • Implement retrieval-augmented generation for knowledge-driven AI applications (a simplified sketch follows below).
  • Integrate vector-native systems with traditional relational and lakehouse platforms.
  • Prepare your organization for the next wave of AI innovation with vector-first design principles.

 

Enrich your enterprise data and AI pipelines to move from traditional storage models to a vector-native foundation built for semantic intelligence, real-time retrieval, and next-generation analytics.
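
For readers new to retrieval-first architectures, here is a deliberately simplified sketch of the RAG flow described above: embed documents, index them, embed the query, retrieve the closest match, and build a grounded prompt. The hashing "embedding" and in-memory index are toy stand-ins; a real deployment would use an embedding model and a vector database, and the example documents and query are invented.

```python
import math, hashlib

def embed(text: str, dims: int = 64) -> list[float]:
    # Toy hashing "embedding": not semantically meaningful, illustration only.
    vec = [0.0] * dims
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))  # vectors are unit-length, so dot = cosine

documents = [
    "Data residency rules require customer records to stay in Canadian regions.",
    "The churn model is retrained weekly from the streaming feature store.",
    "OSFI B-13 expects documented controls for third-party technology risk.",
]
index = [(doc, embed(doc)) for doc in documents]   # the "vector store"

query = "What are the rules about where customer data can be stored?"
q_vec = embed(query)
top_doc = max(index, key=lambda item: cosine(q_vec, item[1]))[0]

prompt = f"Answer using only this context:\n{top_doc}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the generation model
```

The shape of the pipeline is the point: retrieval narrows the model's context to governed, relevant content before any generation happens.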

4:00 pm

CASE STUDY

Streaming Everything: How Real-Time Data, Event Meshes, and Autonomous Pipelines Will Power the Next Generation of AI

Real-time, event-driven architectures are transforming how organizations collect, process, and act on data. By unifying operational data, analytics, and AI workloads on a single real-time backbone, enterprises can move from batch decision-making to continuous intelligence, enabling faster insights, automated interventions, and predictive outcomes. Source your plan of action by:

  • Building low-latency, event-driven pipelines that connect IoT, transactional, and business data streams.
  • Implementing real-time feature stores and streaming ML workflows for predictive and autonomous decisioning (see the sketch below).
  • Designing self-healing data pipelines with observability, alerting, and automated remediation.
  • Integrating streaming systems with existing data lakes, warehouses, and analytics platforms.
  • Preparing for the next generation of AI-driven automation and agentic systems with streaming-first design principles.

 

Advance continuous intelligence across your organization to unlock real-time insight and AI-driven autonomy.
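
A minimal sketch of the streaming-first pattern this session describes, under simple assumptions: events are processed one at a time, features are updated incrementally instead of in nightly batches, and malformed events are parked in a dead-letter queue as a basic self-healing and observability hook. Field names, the anomaly rule, and the sample events are hypothetical.

```python
from collections import defaultdict

# In-memory stand-ins for a real-time feature store and a dead-letter queue.
feature_store = defaultdict(lambda: {"event_count": 0, "running_avg_value": 0.0})
dead_letter_queue = []   # malformed events parked here for automated replay or repair

def process(event: dict) -> None:
    try:
        key, value = event["machine_id"], float(event["vibration_mm_s"])
    except (KeyError, ValueError):
        dead_letter_queue.append(event)   # self-healing: isolate bad events, keep the pipeline running
        return
    feats = feature_store[key]
    prev_avg = feats["running_avg_value"]
    if feats["event_count"] > 0 and value > 3 * prev_avg:
        print(f"ALERT {key}: vibration spike {value} vs running avg {prev_avg:.2f}")
    feats["event_count"] += 1
    # Incremental mean: the feature is updated per event instead of in a nightly batch job.
    feats["running_avg_value"] = prev_avg + (value - prev_avg) / feats["event_count"]

stream = [
    {"machine_id": "press-01", "vibration_mm_s": 1.1},
    {"machine_id": "press-01", "vibration_mm_s": 1.2},
    {"machine_id": "press-01", "vibration_mm_s": 9.8},   # triggers the spike alert
    {"machine_id": "press-02"},                           # malformed: routed to the dead-letter queue
]
for evt in stream:
    process(evt)

print("parked for remediation:", dead_letter_queue)
```

The same structure scales conceptually to a streaming platform: per-key state plus incremental updates is what lets decisions happen at event time rather than after the next batch run.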

4:30 pm

CLOSING COMMENTS FROM YOUR HOST

Review the key solutions and takeaways from the conference. Source a summary of action points to implement in your work.

5:00 pm

CONFERENCE CONCLUDES