(Looking for talks: if you or someone you know would make a great Matrix seminar presenter, please email the Director with details on the talk and speaker.)

Stay up to date with events by subscribing to our (low-volume!) mailing list.


The Matrix Seminar Series

Next Talk

March 17, 2023: Panel on Next-Gen Generative AI
Time: 10:30am-11:30am.
Location: ECS 660.
Title: Panel on Generative AI.
Abstract: Discussion about DALL-E, ChatGPT, and academia. What is the current state of these tools? What do we anticipate happening, and how should academia and industry prepare?
Speakers: panelists Jentery Sayers (English), Callum Curtis (SE undergrad), Yun Lu (CSC), Valerie Irvine (Education), and George Tzanetakis (Music/CSC). Moderated by Matrix Director Neil Ernst.
Registration: Register for in-person. Register for online.


Upcoming Talks

April 28, 2023: Polly Allen: The Risks of Unreliable Genius: Lessons Learned from Working With Generative AI at Amazon Alexa

April 28, 2023, 10:30am-11:30am. ECS 660. Title: The Risks of Unreliable Genius: Lessons Learned from Working With Generative AI at Amazon Alexa. Abstract: Generative text AI has only recently caught the public’s imagination, but the technology underlying ChatGPT has been well known in industry for years. In this talk, Polly Allen, a former Principal Product Manager for Alexa, will demystify what generative text AI models are, what new capabilities they bring, and what risks they introduce. Using real-world examples, she’ll illustrate what’s possible, and how professionals in the industry assess and mitigate risks – from user acceptance to legal compliance to ethical considerations – to get to a launchable feature or product. Bio: Polly Allen has over 20 years of experience developing software, building and leading software teams, and most recently leading data science and engineering teams as a Principal Product Manager for Alexa AI at Amazon, where she led generative artificial intelligence projects (similar to ChatGPT) for Alexa. As a leader in the application of machine learning, she is passionate about DEI in the space and about empowering more people to understand, leverage, and participate in the field. She founded the world’s first AI-focused career accelerator (AI Career Boost) in November 2022, aiming to increase diversity in the AI industry. Polly is an experienced angel investor, a board member at the Center for Workforce Inclusion Labs, and a futurist keynote speaker. She holds an M.Sc. in Software Engineering from MIT and the University of Victoria, and an MBA from the University of British Columbia.

April 21, 2023: Thomas Baker, Quantum Applications

April 21, 2023, 10:30am-11:30am. ECS 660. Thomas Baker, Physics/Chemistry.


Other Events


Past Events

February 17, 2023: The writing is on the wall. But can we see it?

Richard Brath, Managing Partner, Commercial Innovations, Uncharted Software

Abstract. With the rise of ChatGPT, fake news, cyber-bullying, and other communication innovations, the critical analysis of text has never been more important. Yet our analytical toolbox has been overly focused on the analysis and visualization of quantitative data for the last 40 years. Data visualization, which provides quick visual access to patterns, has recently made advances in the visualization of text. These new visualizations readily show patterns in diverse examples such as character analysis, political composition, music lyrics, oil vs. human rights, and more.

Bio. Richard Brath is a long-time researcher and practitioner in data visualization. Commercially, visualizations by Richard’s team at Uncharted Software are in use by hundreds of thousands of users in fields such as capital markets, supply chain, health care, and major league sports. On the research side, Richard recently completed a PhD and has published the books Graph Analysis and Visualization (Wiley, 2014), together with David Jonker, and Visualizing with Text (CRC Visualization Series, 2021).

June 15, 2022: Things We Could Design

Ron Wakkary, Professor, School of Interactive Arts and Technology, Simon Fraser University
June 15, 2022
10am to 11am
ECS 108 and on Zoom
View the video of this presentation

In this talk, Ron Wakkary will discuss his recent book, Things We Could Design: For More Than Human-Centered Worlds (MIT Press, 2021). The book is a critical and creative speculation on designing-with: a relational and expansive design based on humility and cohabitation. The exploration aims for an alternative to a human-centered design rooted in a humanism that begets human exceptionalism founded on ongoing oppression, the exploitation of others, and extractive relations with nonhuman species and matter. The book weaves together posthumanist philosophies with things to critically imagine designing for a world of differentiated humans entangled in an equal fate with all that is not human. The talk will discuss the journey toward concepts of the speaking subject; the human role of gathering and speaking with humans and nonhumans in the assemblies that make up designers; biography, which describes the shared agencies of designer and things through what they jointly inscribe into our worlds and leave behind; and constituency, which seeks forms of collective structures that gather to design across the politics of humans and nonhumans.

Ron Wakkary is a Professor in the School of Interactive Arts and Technology, Simon Fraser University in Canada, where he founded the Everyday Design Studio. In addition, he is a Professor and Chair of Design for More Than Human-Centred Worlds in the Future Everyday Cluster in Industrial Design, Eindhoven University of Technology in the Netherlands. Wakkary’s research investigates the changing nature of design in response to new understandings of human-technology relations and posthumanism. He aims to reflectively create new design exemplars, theory, and emergent practices to contribute generously and expansively to understanding ways of designing that are more accountable, cohabitable, and equitable.

May 6, 2022: Machine Learning for Personalized Curation and Rights Protection at DeviantArt

Peter Gorniak, Head of Artificial Intelligence Engineering, DeviantArt
May 6, 2022
Time 11am to 12pm
ECS 108 and on Zoom
View the video of this presentation

DeviantArt is the largest online social network for artists and art enthusiasts. With almost half a billion works of art, it falls to the company’s Machine Learning and Artificial Intelligence team to build the algorithms and infrastructure that decide when to show which art to whom. I will present our recent work on personalized item-item, user-item, and user-user recommendation systems and their impact on community engagement. I will also discuss DeviantArt Protect, our copyright infringement detection system in the non-fungible token (NFT) space. Our production systems draw from a wide array of social graph and visual deep learning systems. I will focus especially on our design of fast-to-production, reusable, and lean machine learning architectures that scale to the high data volume and throughput requirements at DeviantArt.
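As a rough illustration of the item-item recommendation idea mentioned in the abstract, a minimal cosine-similarity sketch in Python might look like the following. The matrix, data, and function names here are invented for illustration; they are not DeviantArt's actual system or data.

```python
import numpy as np

# Toy user-item interaction matrix (rows: users, cols: artworks);
# 1 = the user favourited that item. Purely illustrative data.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
], dtype=float)

# Item-item cosine similarity: items favourited by the same users score high.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)          # ignore self-similarity

def recommend_similar(item: int, top_n: int = 2):
    """Return the top_n items most similar to `item`."""
    return [int(j) for j in np.argsort(S[:, item])[::-1][:top_n]]

print(recommend_similar(0))       # items most co-favourited with item 0
```

Production systems add much more (implicit-feedback weighting, approximate nearest-neighbour search, freshness), but the item-item core is this similarity lookup.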

Peter Gorniak is Head of Artificial Intelligence, Engineering, at DeviantArt and an adjunct professor at Simon Fraser University. A native of Germany, Peter received a B.Sc. and M.Sc. in Computer Science from the University of British Columbia, Canada, and a Ph.D. from MIT. He has published scientific work on machine learning, user modelling, the representation of concepts, situated language understanding and plan recognition in computer games in leading computer and cognitive science journals. He subsequently applied his research in the computer games industry as studio AI lead at Mad Doc Software/Rockstar New England and on Max Payne 3 at Rockstar Vancouver. Peter has spent time teaching and researching games as an assistant professor at Simon Fraser University.

March 13, 2020: BC Stats — Data Science for Policy & Operational Decisions

Julie Hawkins, Martin Monkman & Stephanie Yurchak, BC Stats, Ministry of Jobs, Economic Development and Competitiveness
March 13, 2020
2pm to 3pm
ECS 660
View the slides from this presentation

BC Stats, the statistics bureau of the Province of British Columbia, has been providing quantitative evidence to support decision-making for over 125 years. In the past 5 years, the organization has moved purposefully to adopt data science practice, methods, and tools. This has been centred on the open source language R and other elements in that ecosystem.

We will present:
• The motivations behind the use of R, including reproducibility and other strategies to make our workflow more robust;
• Examples of the kinds of problems that a data scientist in the public service confronts; and
• How those problems have been tackled.

Julie Hawkins, B.Sc.

Data Scientist
Julie joined BC Stats in 2001 as a Research Analyst and moved up the ranks as a Senior Research Analyst, Team Leader, and then Manager, but demoted herself out of management to return to working directly with data. On the relatively new Data Science Team, Julie has been involved in multiple projects with R. These include automating or improving co-workers’ programs (e.g., running the population app in Shiny, determining the top drivers of engagement from many possibilities, reporting on cumulative monthly housing data); writing R replacements for outdated, expensive, or confusing software (e.g., structural equation modelling in R instead of SPSS AMOS, raking population estimates in R instead of APL); auto-suppressing data results to maintain respondent confidentiality; conducting survey data analysis with custom functions, from basic table creation (e.g., New Job Survey data analysis, BC Knowledge Development Fund Report) to online dashboards and report generation (e.g., Elections BC Voters’ List Quality); and teaching (e.g., creating in-house R package tutorials and presenting at the inaugural BCGov useR! Workshop).

Julie earned a Bachelor of Science degree in Psychology, with a minor in Statistics, from the University of Victoria. She tries to entice her sons to learn and use R, and has had many conversations with her sister about R functions and packages.

Martin Monkman, B.Sc., M.A.
Provincial Statistician & Director
Building on his previous experience analyzing data in a variety of contexts, Martin first joined BC Stats (British Columbia’s statistics bureau) in 1993. In subsequent years, Martin has built a wide range of experience using data science to support evidence-based policy and business management decisions. Now the Provincial Statistician & Director at BC Stats, Martin leads a dynamic and innovative team of data scientists in analyzing statistical information about the economic and social conditions of British Columbia, and measuring public sector organizational performance.
Martin also teaches Data Analytics Coding Fundamentals (BIDA 302) at UVic’s Continuing Studies, has taught an introductory R course at Simon Fraser University’s City Program, and is working on becoming a Carpentries instructor.

Martin holds Bachelor of Science and Master of Arts degrees in Geography from the University of Victoria, is a member of the Statistical Analysis Committee of the Society for American Baseball Research (SABR), and occasionally blogs about data science and the analysis of baseball statistics using R, the open-source language and environment for statistical computing and graphics.

Stephanie Yurchak, B.Sc.
Data Scientist
Stephanie joined BC Stats in 2015. She has predominantly been involved with government-wide surveys, leading the analysis and reporting. Due to her R knowledge, her participation in these survey projects has reduced timelines for delivering reports and changed future expectations. Since joining the Data Science Team in 2018, she continues to be a go-to person for report automation across BC Stats, and has branched out into new projects such as writing the BC Retail Sales Web App in Shiny and serving as a Subject Matter Expert for a contracted web development project.

Before joining BC Stats, Stephanie worked as a Research Assistant at the University of Victoria and a Methodologist at Statistics Canada. She earned a Bachelor of Science in Combined Mathematics and Statistics from the University of Victoria.

February 6, 2020: The Deep and Dark Web

Mike Anderson, COO, Echosec Systems
February 6, 2020
3pm to 4pm
ECS 660
View the video of this presentation

The terms deep web and dark web are often used interchangeably and associated with online criminal activity. Many sources also describe the dark web as a place physically separate from the internet as we know it. In reality, the deep web and dark web are two very different things, and they function alongside the surface web rather than in compartmentalized sections of a digital space. The dark web is an evolving system to access websites, much the same as the system of DNS and search engines that we call the internet.

We are often asked: what are the deep web and the dark web? What do people (and criminals) actually do on them? And how do you find and stop criminal activity on the dark web? This seminar covers these questions, as well as the mechanics of dark web protocols like Tor and I2P, and how dark web properties facilitate criminal activity.

Mike Anderson is COO and Co-Founder of Echosec Systems. Echosec is a software company specializing in online threat discovery using data from social media and the dark web. Mike co-founded Echosec in 2013 as part of the UVic Entrepreneurial Engineering Masters Program. In 2019 he completed his Master’s thesis in distributed systems observability based on his work at Echosec, where he currently oversees operations. If you spot Mike away from the office, he is likely tapping into his passion for putting ill-fitting electronics together in innovative ways.

December 6, 2019: Protecting Reason from the Data Crunch

Dr. Jorge Aranda, Senior Software Development Engineer at Workday
December 6, 2019
3pm to 4pm
ECS 660

The recent astonishing successes of data science and machine learning techniques make it seem as if we could solve the world’s problems with models and neural networks. They have made us giddy and overconfident. As a result, we often fail to reason about what we do—to ask “why?”, to question our assumptions and the validity of our datasets, to consider human nature and second-order effects. In this talk I will discuss the practical consequences of this failure, as well as several remedies I have seen in the field to help overcome it.

Jorge Aranda is a Senior Software Development Engineer on Workday’s Video Intelligence team. Since receiving his Ph.D. in Computer Science from the University of Toronto in 2010, he has worked on applied data science and machine learning research projects in a variety of settings.

November 1, 2019: Seq, a High-Performance Language for Bioinformatics

Dr. Ibrahim Numanagić, Department of Computer Science, University of Victoria
Canada Research Chair in Computational Biology and Data Science
November 1, 2019
3pm to 4pm
ECS 660

The scope and scale of biological data is increasing at an exponential rate, as technologies like next-generation sequencing become radically cheaper and more prevalent. Over the last two decades, the cost of sequencing a genome has dropped from $100 million to nearly $100—a factor of over a million—and the amount of data to be analyzed has increased proportionally. Yet, as Moore’s Law continues to slow, computational biologists can no longer rely on computing hardware to compensate for the ever-increasing size of biological datasets. In a field where many researchers are primarily focused on biological analysis over computational optimization, the unfortunate solution to this problem is often to simply buy larger and faster machines.

Slides from this talk are available here.

In this talk, I will introduce Seq, the first language tailored specifically to bioinformatics, which marries the ease and productivity of Python with C-like performance. Seq is a subset of Python—and in many cases a drop-in replacement—yet also incorporates novel domain-specific language features and optimizations for computational bioinformatics applications. On equivalent CPython code, Seq attains a performance improvement of up to two orders of magnitude, and a 175x improvement once domain-specific language features and optimizations are used. Compared to optimized C++ code, which is already difficult for most biologists to produce, Seq frequently attains up to a 2x improvement, with shorter, cleaner code. Thus, Seq opens the door to an age of democratization of highly optimized bioinformatics software.
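Since the abstract describes Seq as a subset of Python, a plain-Python routine illustrates the kind of tight sequence-processing loop involved; a compiler like Seq executes such loops as native code, while CPython interprets them. This is ordinary CPython, not actual Seq syntax, and the function is an invented example rather than code from the talk:

```python
from collections import Counter

def kmer_counts(seq: str, k: int) -> Counter:
    """Count all length-k substrings (k-mers) of a DNA string.

    This per-character loop is exactly the hot path that a compiled
    bioinformatics language can turn into native code.
    """
    counts = Counter()
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return counts

print(kmer_counts("ACGTACGTAC", 4).most_common(2))
```

On real read sets (millions of sequences), the interpreter overhead of this loop dominates, which is where the reported order-of-magnitude speedups come from.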

Ibrahim Numanagić is an Assistant Professor and Canada Research Chair (Tier 2) in Computational Biology and Data Science at the University of Victoria. He was a postdoctoral associate in the Computation and Biology Group at MIT CSAIL. He received his B.Sc. from the University of Sarajevo in Bosnia and Herzegovina, and his M.Sc. and Ph.D. in Computer Science from Simon Fraser University. His research focuses on developing efficient and scalable combinatorial algorithms and tools to help analyze vast amounts of genomic sequencing data.

January 16, 2020: Beyond Visualization Wizardry — the Role of Interaction in Data Visualization

Dr. Charles Perin
Department of Computer Science, University of Victoria
January 16, 2020
3pm to 4pm
ECS 660

Visualization is not just a way of creating pretty pictures and “intuitive dashboards”. It is not a magic wand that you can apply to your dataset to automatically turn a data mess into “actionable insights for transformative results”. Far from this wizardry, I argue that understanding data comes at the cost of interacting with it. I will go through research projects I have worked on in recent years – ranging from the manual reordering of matrices to the notion of active reading of visualizations, the idea of personal agency, the challenge of interaction discoverability, adorable micro-robots, and the paradigm of direct manipulation of graphical encodings – to provide many angles on the topic of interaction in visualization. I will then synthesize this research in the context of my most recent work, in which we (attempt to) answer the question every visualization researcher (and teacher) is striving for: what is interaction for data visualization?

Charles Perin is an Assistant Professor of Computer Science at the University of Victoria, where he leads the UViz research group specializing in information visualization and human-computer interaction. At UViz we are particularly interested in designing and studying new interactions for visualizations and in understanding how people may make use of and interact with visualizations in their everyday lives; in designing visualization tools for authoring personal visualizations and for exploring and communicating open data; in sports visualization; and in visualization beyond the desktop. Before joining Victoria in 2018, Charles was a Lecturer in London (the real one); before that, a post-doctoral researcher in Calgary; before that, a PhD student in Paris; and long before that, a kid in Brittany.

June 11, 2019: Matrix Annual Symposium

The 2019 Matrix Institute for Applied Data Science Symposium will be held on June 11 between 11am and 5pm in the Bob Wright Centre.

Our keynote speaker this year will be Dan Russell, Google Senior Research Scientist and leader of the Search Quality & User Happiness group. We will also be featuring talks from our cross-campus and industry collaborators, as well as a Matrix Subspace student reception at the end of the day.


11:00 AM: Welcome to the 2nd annual Matrix Symposium

11:15 AM: Cross-campus collaborations

  • Kim Venn, Professor, Department of Astronomy and Physics, Machine Learning for Astronomy and the Maunakea Spectroscopic Explorer
  • Luis Meneses, Postdoc, Department of Humanities, Mining and Discovery Tools in Open Access Repositories
  • Maycira Costa, Professor, Department of Geography, Data Science in Remote Sensing and Geography

12:30 PM: Lunch by RSVP

1:00 PM: Keynote

2:00 PM: Break

2:30 PM: Industry collaborations

  • Anthony Theocharis, Senior Director of Software Development, Workday, Data Science and Machine Learning at Workday
  • Richard Egli, Director, Alacrity Canada, and Stephen Neville, Associate Professor, Department of Electrical and Computer Engineering, Entrepreneurship@UVic
  • Jacques Van Campen, Director of Innovation, South Island Prosperity Project, and David Bristow, Assistant Professor, Department of Civil Engineering, Smart Cities and the South Island Prosperity Project
  • James Colliander, Director, Pacific Institute for the Mathematical Sciences, PIMS and Syzygy

4:00 PM: Subspace student-led poster reception, sponsored by Workday

5:00 PM: Finis

April 11, 2019: Is Greedy Coordinate Descent a Terrible Algorithm?

Dr. Mark Schmidt, Department of Computer Science, UBC
Canada Research Chair in Large-Scale Machine Learning
April 11, 2019
3pm to 4pm
ECS 660

There has been significant recent work on the theory and application of randomized coordinate descent algorithms, beginning with the work of Nesterov [2012], who showed that a random-coordinate selection rule achieves the same convergence rate as the Gauss-Southwell selection rule [which picks the best coordinate at each iteration]. This result suggests that we should never use the Gauss-Southwell rule, because it is typically much more expensive than random selection. However, the empirical behaviours of these algorithms contradict this theoretical result: in applications where the computational costs of the selection rules are comparable, the Gauss-Southwell selection rule tends to perform substantially better than random coordinate selection. We give a simple analysis of the Gauss-Southwell rule showing that—except in extreme cases—its convergence rate is faster than choosing random coordinates. We also (i) show that exact coordinate optimization improves the convergence rate for certain sparse problems, (ii) propose a Gauss-Southwell-Lipschitz rule that gives an even faster convergence rate given knowledge of the Lipschitz constants of the partial derivatives, (iii) analyze the effect of approximate Gauss-Southwell rules, (iv) analyze proximal-gradient variants of the Gauss-Southwell rule, and (v) show that these fast rates can be achieved on some non-convex problems.
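The contrast the abstract draws between random selection and the Gauss-Southwell rule is easy to reproduce in a toy experiment. The sketch below (illustrative only, not code from the talk) runs exact coordinate descent on a random positive-definite quadratic under both rules and compares the remaining optimality gap:

```python
import numpy as np

def coord_descent(A, b, rule="gs", iters=200, seed=0):
    """Minimize f(x) = 0.5 x^T A x - b^T x by exact coordinate descent.

    rule="gs"     -> Gauss-Southwell: update the coordinate with the
                     largest-magnitude partial derivative.
    rule="random" -> uniform random coordinate selection.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        grad = A @ x - b                    # gradient of the quadratic
        i = int(np.argmax(np.abs(grad))) if rule == "gs" else int(rng.integers(n))
        x[i] -= grad[i] / A[i, i]           # exact minimization along coordinate i
    return x

# Toy comparison on a random symmetric positive-definite problem.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)               # well-conditioned SPD matrix
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)

def f(x):
    return 0.5 * x @ A @ x - b @ x

gap_gs = f(coord_descent(A, b, "gs")) - f(x_star)
gap_rand = f(coord_descent(A, b, "random")) - f(x_star)
print(gap_gs, gap_rand)                     # GS typically leaves a smaller gap
```

With the per-iteration cost held equal, the greedy rule usually closes the gap faster here, matching the empirical behaviour the abstract describes; the talk's contribution is an analysis explaining why.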

Mark Schmidt has been an assistant professor in the Department of Computer Science at the University of British Columbia since 2014. He is a Canada Research Chair, Alfred P. Sloan Fellow, and Senior Fellow in the Canadian Institute for Advanced Research (CIFAR) Learning in Machines and Brains program. His research focuses on developing faster algorithms for large-scale machine learning, with an emphasis on methods that have provable convergence rates and can be applied to structured prediction problems. From 2011 through 2013 he worked at the École normale supérieure in Paris on inexact and stochastic convex optimization methods. He finished his M.Sc. in 2005 at the University of Alberta working as part of the Brain Tumor Analysis Project, and his Ph.D. in 2010 at the University of British Columbia working on graphical model structure learning with L1-regularization. He has also worked at Siemens Medical Solutions on heart motion abnormality detection, with Michael Friedlander in the Scientific Computing Laboratory at the University of British Columbia on semi-stochastic optimization methods, and with Anoop Sarkar at Simon Fraser University on large-scale training of natural language models. Along with Nicolas Le Roux and Francis Bach, he was awarded the SIAM/MOS Lagrange Prize in Continuous Optimization in 2018.

October 18, 2019: What Every Software Engineer Ought to Know About Data Science

Dr. Greg Wilson
October 18, 2019
3pm to 4pm
ECS 660

Engineering has been defined as “the use of the scientific method to design and build new things”, but software engineering courses rarely require students to conduct experiments or analyze data. This talk describes what such a course would look like, what its benefits would be, and how we can get there from here.

Slides from this talk are available here.

Dr. Greg Wilson has worked for 35 years in both industry and academia, and is the author or editor of several books on computing and two for children. He is best known as the co-founder of Software Carpentry, a non-profit organization that teaches basic computing skills to researchers, and is now part of the education team at RStudio.

February 27, 2019: Data, Data Analysis, and Machine Learning in Astrophysical Stellar Spectroscopic Surveys

Dr. Kim Venn, Department of Physics and Astronomy, University of Victoria
Canada Research Chair in Exploration and Understanding of Space
February 27, 2019
3pm to 4pm
ECS 660

To unravel the formation history of the Milky Way, spectroscopic surveys are currently being carried out to gather chemical abundance ratios and kinematic information of stars throughout the Galaxy. High-resolution spectra of ~1 million stars are being collected through the US SDSS-APOGEE survey, the Australian GALAH survey, and the European ESO-Gaia survey, while lower resolution spectra have been collected through other US SDSS surveys, the Chinese LAMOST survey, and the European Gaia mission.  It is important that these spectral datasets be analysed homogeneously to have the highest scientific impact.  For this reason, various data-driven analysis tools have been developed, often combining priors that model the individual spectra and/or the stellar populations.

More recently, machine-learning techniques have been used to examine synthetic spectra and train a neural network for very fast and efficient analyses.

The quality of these results is under constant evaluation; for example, checking that results from the data-driven approaches are drawn from physically sensible features in the stellar spectra and do not merely exploit astrophysical correlations between different chemical elements. I will review the science goals, data, and various data analysis approaches for these and other forthcoming spectroscopic surveys.

Kim Venn is a Canadian astronomer whose scientific expertise is on the spectral analysis of stars, focusing on the chemistry of stars in globular clusters and the nearby dwarf galaxies.  Her early research was recognized with a US Presidential Early Career Award in Science and Engineering (2000), and she joined UVic in 2005 as a Canada Research Chair (Tier II).  She also has a strong interest in astronomical instrumentation, ranging from adaptive optics to fibre-fed spectroscopy, and was the co-recipient of the UVic REACH Award for Excellence in Research Partnerships (2018) for work on the RAVEN Multi-Object Adaptive Optics science demonstration instrument, with Prof. Colin Bradley (UVic Mech Eng).  In 2015, she founded and became the first Director of the UVic Astronomy Research Centre, and in 2017 she was the PI for a successful NSERC CREATE award to develop a national training program on New Technologies for Astronomical Observatories.  She is currently interested in the efficient and high quality analysis of stellar spectral surveys through machine learning and other data analysis techniques.

February 13, 2019: Incorporating Intelligent Language Analysis into Educational Technology

Dr. Fred Popowich, School of Computing Science, Simon Fraser University
February 13, 2019
3pm to 4pm
ECS 660

Given society’s current interest in the application of artificial intelligence techniques to a wide range of activities, in conjunction with the vast amount of data now available, it is not surprising to see increased interest in how these techniques can be applied in the context of Technology Enhanced Learning (TEL). Specifically, we are investigating how the analysis of natural language phrases contained in the writings of learners (for example, in essays or in short answers to questions) can be leveraged to assist learners in their writing. After a brief introduction to some key issues in big data, including an overview of activities associated with KEY, SFU’s Big Data Initiative, we will provide details on a system that can automatically make recommendations to a learner to improve their writing.

Dr. Fred Popowich is a leading computing scientist and seasoned administrator at Simon Fraser University (SFU), Canada’s leading engaged university. As Scientific Director of SFU’s Big Data Initiative, he is responsible for leading and implementing KEY, which engages people in advanced computing for innovation in teaching, research, and community impact. His other roles at the School of Computing Science have included Associate Director of Research and Industry Relations, Director of the Professional Master’s Program in Big Data, and Associate Dean.

Dr. Popowich received his PhD in Artificial Intelligence and Cognitive Science from the University of Edinburgh in 1989 and has since been a faculty member in the School of Computing Science at SFU, as well as an Associate Member of the Linguistics Department. His research is concerned with how computers can be used to process human language, either to make it easier for human beings to interact with computers, or to make it easier for human beings to interact with each other. As such, he has been concerned with how knowledge about language and the world can be represented, maintained, and even learned by computers. Typical real-world applications of this research include “smart homes”, the automatic translation of language, tools to assist people in learning language, and technology to help people search and manage the vast amount of information contained on computer systems and networks.

March 13, 2019: Natural Language Question Answering in the Financial Domain

John Boyer, PhD
IBM Distinguished Engineer, Master Inventor, & IBM Q (Quantum Computing) Ambassador
March 13, 2019
3pm to 4pm
ECS 660

The intent of this presentation is to help attendees see a holistic view of the practice of natural language processing by examining an array of technical issues that must be jointly addressed to fully solve a real-world set of use case requirements from a specific domain. In this presentation, we will examine a natural language question answering system focused on answering financial domain questions using a daily-updated corpus of financial reports. Financial entity types of interest included company stocks, country bonds, currencies, industries, commodities, and diversified assets. Financial questions of interest included explanatory and factual questions about entities, as well as financial outlook for entities.

The first architectural divergence that emerged in the system was the distinction between how natural language processing is normally practiced to answer informational questions versus the practices that are required to answer financial outlook questions. We will also cover additional challenges addressed by the system in the areas of document ingestion, question classification accuracy, the practical speed of machine learning, answer ranking by linguistic confidence versus temporality, and system accuracy assessment.

John Boyer is a Distinguished Engineer, Master Inventor, and IBM Q (Quantum Computing) Ambassador at IBM. He received his PhD in Computer Science from the University of Victoria. You can read more about John here.

December 12, 2018: How Deep Learning Will Shape How We Understand Computations in the Brain

Dr. Tim Kietzmann, University of Cambridge
December 12, 2018
3pm to 4pm
ECS 660

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behaviour. At the heart of the field are its models, i.e. mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioural responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks.

These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g. visual object and auditory speech recognition) to cognitive tasks (e.g. machine translation), and on to motor control (e.g. playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviours, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
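The four directly manipulable elements named above can be made concrete in a deliberately tiny sketch. The network, task, and hyperparameters below are hypothetical illustrations chosen for brevity (a toy OR task trained with plain backpropagation), not anything presented in the talk:

```python
# Illustrative sketch: a miniature neural network with the four
# manipulable elements labelled. All choices are hypothetical stand-ins.
import math
import random

random.seed(0)

# 1. Input statistics: a toy dataset (the OR function).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# 2. Network structure: 2 inputs -> 2 sigmoid hidden units -> 1 output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sigmoid(sum(wi * hi for wi, hi in zip(w2, h)) + b2)

# 3. Functional objective: squared error between output and target.
# 4. Learning algorithm: gradient descent via manual backpropagation.
lr = 1.0
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)          # error signal at the output
        for j in range(2):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # backpropagated signal
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
```

Changing any one of the four elements (the dataset, the layer sizes, the loss, or the update rule) changes what the trained units compute, which is the sense in which such networks are manipulable rather than opaque.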

Tim Kietzmann is a Researcher and Graduate Supervisor at the MRC Cognition and Brain Sciences Unit of the University of Cambridge. He investigates principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.

December 5, 2018: The Interface Between Engineering and Data Science – Tools and Applications

Dr. Ralph Evins, Department of Civil Engineering, University of Victoria
December 5, 2018
3pm to 4pm
ECS 660

Engineers have embraced the power of computational optimisation and simulation methods to explore complex trade-offs between conflicting objectives. However, much of this work rests on long-standing physics-based models that do not necessarily match reality. Data is now more readily available than ever, but it is not being leveraged in the design process. New approaches to modelling are needed for further progress, including modularised software development, the blending of machine learning and data-driven methods with physics-based simulations, and ways of incorporating all of this into the design process.

Dr. Evins’ doctoral thesis on “Multi-objective optimisation as an aid to design space exploration for low-carbon buildings” explored new ways of using computational tools to deliver high-performance buildings. He was a post-doctoral researcher then Group Leader at the Urban Energy Systems laboratory at Empa / ETH Zurich in Switzerland, working on district-scale energy systems optimization. There he led the development of the Holistic Urban Energy Simulation platform. As an Assistant Professor at the University of Victoria, Dr. Evins is leading projects to develop the Building and Energy Systems Optimisation and Surrogate modelling platform (BESOS) and to deliver a visualization tool based on this for use by industry. He is a Chartered Engineer with CIBSE.

January 23, 2019: Spectral Dynamic Causal Modelling of Resting-State fMRI – Relating Effective Brain Connectivity in the Default Mode Network to Genetics

Dr. Farouk Nathoo, Department of Mathematics and Statistics, University of Victoria
Canada Research Chair in Biostatistics for Spatial and High-Dimensional Data
January 23, 2019
3pm to 4pm
ECS 660

We conduct a novel imaging genetics study of the Alzheimer’s Disease Neuroimaging Initiative based on resting-state fMRI (rs-fMRI) and genetic data obtained from 112 subjects, where each subject is classified as either cognitively normal (CN), as having mild cognitive impairment (MCI), or as having Alzheimer’s Disease (AD). A Dynamic Causal Model (DCM), a state space model for neuronal dynamics, is fit to the rs-fMRI time series in order to estimate a directed network representing effective brain connectivity within the default mode network (DMN), a key network commonly known to be active when the brain is at rest. These networks are then related to genetic data and Alzheimer’s disease in the first imaging genetics study to use DCM as a neuroimaging phenotype. Our proposed pipeline is comprised of four analyses linked together with the objective of shedding light on the relationship between brain connectivity and genetics in relation to disease. In the first analysis we examine differences in effective connectivity across disease groups. In the second analysis we relate the probability of disease to genetics and obtain a subset of priority single-nucleotide polymorphisms (SNPs) potentially related to disease. In the third analysis we investigate how effective brain connectivity is related to the subset of priority SNPs obtained in the second study. In the fourth and final analysis we examine longitudinal data on changes with respect to MCI and AD through a classifier in order to determine how well disease progression can be predicted from the combination of effective brain connectivity and genetic data. Our new pipeline for connectome genetics has general applicability and its specific application to the ADNI study motivates a number of future studies in this nascent area.

Dr. Farouk Nathoo is an Associate Professor and Tier 2 Canada Research Chair in the Department of Mathematics and Statistics at the University of Victoria, and an Adjunct Professor in the Department of Statistics and Actuarial Science at Simon Fraser University. His primary areas of research are the analysis of neuroimaging data, imaging genetics, statistical modelling, and Bayesian methods.

October 31, 2018: When Notebooks Are Not Enough – Constructing Workflows for Reproducible Analytics

Dr. Andriy Koval
Health System Impact Fellow, Observatory for Population and Public Health, UBC
Data Science Studio, Institute on Aging and Lifelong Health, University of Victoria
October 31, 2018
3pm to 4pm
ECS 660
Slides can be found here

While computational notebooks offer scientists and engineers many helpful features, the limitations of this medium make it but a starting point in creating software – the practical goal of data science. Where do we go from computational notebooks if our projects require multiple interconnected scripts and dynamic documents? How do we ensure reproducibility amidst growing complexity of analyses and operations?

I will use a concrete analytical example to demonstrate how constructing workflows for reproducible analyses can serve as a next step from computational notebooks toward creating software. First, I will demonstrate a reproducible graphing system designed for the IPDLN-2018 hackathon, organized by Statistics Canada. The system evaluates synthetic socioeconomic and mortality data with logistic regression. Then I will discuss the workflow of the project that implements this graphing system and the RStudio + GitHub setup that hosts it. I will conclude by making the case for preferring reproducible workflows with version control over computational notebooks (e.g. Jupyter, R Notebook).
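As a rough illustration of the kind of analysis described above, here is a minimal, self-contained sketch (in Python rather than the R used in the talk) that fits a logistic regression to synthetic data by gradient descent. All variable names, parameters, and data are hypothetical and are not taken from the IPDLN-2018 system:

```python
# Hypothetical sketch: logistic regression on synthetic
# socioeconomic/mortality data. Names and values are illustrative only.
import math
import random

random.seed(42)

# Synthetic data: a socioeconomic score drives mortality probability
# through known "true" coefficients we then try to recover.
n = 2000
true_b0, true_b1 = -1.0, 1.5
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(true_b0 + true_b1 * x)))
      else 0 for x in xs]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit by plain gradient descent on the average negative log-likelihood.
b0, b1 = 0.0, 0.0
lr = 0.5
for _ in range(1000):
    g0 = g1 = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(b0 + b1 * x) - y
        g0 += err
        g1 += err * x
    b0 -= lr * g0 / n
    b1 -= lr * g1 / n

print(f"intercept={b0:.2f} slope={b1:.2f}")  # roughly recovers (-1.0, 1.5)
```

In a reproducible workflow of the sort the talk advocates, a script like this would live under version control alongside the scripts that generate the data and the graphs, so the whole analysis can be re-run end to end.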

October 17, 2018: When Writing It Down is Not Enough – the Era of Computational Notebooks

Dr. Neil Ernst, Department of Computer Science, University of Victoria
October 17, 2018
3pm to 4pm
ECS 660
Slides can be found here

The lab book has always been an indispensable part of scientific inquiry. Although to date nearly all famous discoveries have been documented by hand, on paper, our digital data-centric age means future discoveries are almost certain to be captured digitally. Digital notebooks have produced a remarkable shift in how scientists work in the lab and the field. And yet these new media come with many unknowns and vulnerabilities. In this talk I will outline how notebooks have been used historically, illustrating the many strengths of the analog approach to capturing scientific inquiry. Then I will introduce digital notebooks like Jupyter and R Notebooks. After explaining and demonstrating their features, I will illustrate their strengths and weaknesses. I will outline some of the research my students and I are conducting into some of these challenges, including notebook provenance, notebook testing, and notebook usability.

November 21, 2018: Panel on Ethics and Applied Data Science

Moderator: Margaret-Anne Storey, Matrix Co-Director, University of Victoria
November 21, 2018
3pm to 4:30pm
ECS 660

The panelists:

  • Helen Fotos, Principal Consultant, Metronome Consulting
  • Evert Lindquist, Professor, School of Public Administration, University of Victoria
  • Dimitri Marinakis, Principal, Kinsol Research Inc.
  • Nishant Mehta, Assistant Professor, Department of Computer Science, University of Victoria
  • Jorin Weatherston, Master’s student, Department of Computer Science, University of Victoria

September 10, 2018: Deep Learning, Artificial Evolution and Novel AI Behaviors

Dr. Vadim Bulitko, Department of Computing Science, University of Alberta
Monday Sept 10, 2018
1:30pm to 2:30pm
ECS 660

Artificial Intelligence is rapidly entering our daily life in the form of smartphone assistants, self-driving cars, etc. While such AI assistants can make our lives easier and safer, there is a growing interest in understanding how long they will remain our intellectual servants. With the powerful applications of self-training and self-learning (e.g., the recent work by Deep Mind on self-learning to play several board games at a championship level), what behaviors will such self-learning AI agents learn? Will there be genuine knowledge discoveries made by them? How much understanding of their novel behavior will we, as humans, be able to gather?

April 19, 2018: Kickoff Symposium

Our kickoff symposium was held on April 19, 2018 at the University of Victoria. The symposium drew 43 UVic faculty from 16 departments across campus, 10 industry representatives from regional companies, and 23 leadership advisory board members (from the BC government, industry, academia, and funding agencies). It showcased a keynote presentation by Patrice Simard of Microsoft Research on “Machine Learning” and a number of research briefs and industry talks (attended by 40 students). We held breakout sessions to discuss future institute goals and activities. We learned that academic members join the institute to form new academic collaborations and partnerships with industry and governments, and to gain a broader view of research. Industry members likewise seek collaborations with other industry and academic members, along with access to expertise and training opportunities.