Reshaping AI-Enabled Discovery and Understanding from the Molecular to the Media
More than 415 guests registered to attend the three Distinguished Speaker Series events NNCI hosted this month
AI is reshaping scientific discovery and redefining how knowledge is created—from journalism to materials synthesis, drug discovery, and approximation algorithms for understanding complex phenomena.
More than 415 researchers, students, and practitioners registered to attend three recent events exploring these themes, hosted by the Northwestern Network for Collaborative Intelligence (NNCI).
Through the Distinguished Speaker Series, NNCI sparks interdisciplinary and cross-functional dialogue and illuminates emerging ideas around responsible and high-impact research and education in data science and AI.

“AI is transforming numerous disciplines and industry verticals every day,” said V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science at Northwestern Engineering, faculty fellow at the Northwestern Roberta Buffett Institute for Global Affairs, and director of the Northwestern Security and AI Lab. “We are thrilled to bring the world’s foremost scholars in AI and its applications to Northwestern.”
Subrahmanian and Abel Kho are the founding codirectors of NNCI. Kho is a professor of medicine and preventive medicine at Northwestern University Feinberg School of Medicine and director of the Institute for Artificial Intelligence in Medicine.
“While even the experts may not have all the answers, their expertise helps us to see and prioritize the promising opportunities and pressing questions upon which to focus,” Kho said.
AI and the Future of News
For NNCI’s first Distinguished Panel event on Feb. 2, professor of computer science Larry Birnbaum moderated a discussion examining how generative technologies are reshaping reporting, distribution, and public trust. The panelists included Emily Withrow, senior vice president of AI products and platforms at The New York Times; Nicholas Diakopoulos, professor of communication studies; and Jeremy Gilbert, Knight Chair for Digital Media Strategy.

From AI-assisted news gathering and document analysis to personalization that supports comprehension rather than fragmentation, the discussion underscored a clear imperative: building the technical and editorial infrastructure—evaluation methods, standards, and governance—required to deploy AI responsibly in journalism.
Drawing a distinction between automation and understanding, panelists emphasized promising AI applications that help audiences navigate complexity. The group also spoke candidly about the limits of current generative systems, namely their inability to meet the non-negotiable newsroom standard of accuracy. Diakopoulos noted that AI’s most immediate value may be in stronger upstream news gathering. He explained that AI is opening new avenues for scaling up the ability to monitor information ecosystems, identify trends and anomalies, and analyze documents.
Generative AI for Molecules and Materials
On Feb. 5, NNCI welcomed a panel of leading Northwestern Engineering materials science and synthetic biology researchers. Christopher Schuh, Dean of the McCormick School of Engineering and John G. Searle Professor of Materials Science and Engineering, moderated the discussion with Julius Lucks, Margery Claire Carlson Professor of Chemical and Biological Engineering, and Chris Wolverton, Frank C. Engelhart Professor of Materials Science and Engineering. Drawing on complementary perspectives from computation, experimentation, and biological systems engineering, the panelists explored how generative AI is reshaping discovery, design, and scientific practice at the molecular scale.

Emphasizing how AI is transforming the very meaning of “discovery,” the panelists noted that computational predictions now vastly outnumber experimentally realized materials, raising foundational questions about novelty, validation, and scientific understanding. And while AI represents a step-change in biological understanding and design, they agreed that synthesizing AI-predicted compounds remains a critical bottleneck, especially as the frontier moves toward a function-first approach.
The conversation also grappled with the broader implications of building the technical, experimental, and institutional infrastructure needed to ensure that AI-driven science is not only fast, but rigorous and responsible—from the role of physical constraints in learning systems to intellectual property, dual-use concerns, and how AI may ultimately reshape the work of scientists themselves.
Big-Data Algorithms That Are Not Machine Learning
At NNCI’s standing-room-only event on Feb. 10, Jeffrey Ullman presented a set of elegant and powerful algorithmic ideas that make sense precisely when data is too large for exact solutions, including locality-sensitive hashing, approximate counting, sampling, and counting triangles in graphs.

A foundational figure in computer science, Ullman is a recipient of the ACM A.M. Turing Award, commonly referred to as the “Nobel Prize in Computing.” He is the Stanford W. Ascherman Professor of Engineering (Emeritus) at Stanford University and CEO of Gradiance Corp. During his talk, Ullman reflected on the current moment in AI—what machines can automate, what remains deeply human, and why algorithmic insight continues to matter even as code generation and large models advance rapidly.
Many of today’s most pressing problems at scale, Ullman explained, can be solved by carefully designed approximations that leverage structure, randomness, and theory. By trading exactness for speed, these classical algorithmic ideas produce strong approximations at a fraction of the cost of computing exact answers.
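To give a flavor of these ideas, the sketch below shows MinHash, one common form of locality-sensitive hashing of the kind Ullman described. It estimates the Jaccard similarity of two large sets by comparing short signatures rather than the sets themselves; the function names, parameters, and example data here are illustrative assumptions, not code from the talk.

```python
import random

def minhash_signature(items, num_hashes=128, seed=0):
    """Build a MinHash signature: for each random hash function,
    record the minimum hash value seen over the set's elements."""
    rng = random.Random(seed)
    prime = (1 << 61) - 1  # large prime modulus for the linear hash functions
    params = [(rng.randrange(1, prime), rng.randrange(0, prime))
              for _ in range(num_hashes)]
    # Map each element to a fixed non-negative integer once.
    values = [hash(x) & 0xFFFFFFFF for x in items]
    return [min((a * v + b) % prime for v in values) for a, b in params]

def estimate_jaccard(sig_a, sig_b):
    """The fraction of signature positions that agree estimates the
    Jaccard similarity |A intersect B| / |A union B|."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

if __name__ == "__main__":
    a = {f"token{i}" for i in range(1000)}
    b = {f"token{i}" for i in range(300, 1300)}
    exact = len(a & b) / len(a | b)  # 700 / 1300, about 0.538
    approx = estimate_jaccard(minhash_signature(a), minhash_signature(b))
    print(f"exact Jaccard = {exact:.3f}, MinHash estimate = {approx:.3f}")
```

Because both signatures share the same random hash functions, each position agrees with probability equal to the true Jaccard similarity, so a few hundred hash values give a good estimate no matter how large the underlying sets grow.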