Cornell Bowers College of Computing and Information Science

Bowers Distinguished Speaker Series

The Cornell Ann S. Bowers College of Computing and Information Science is excited to announce its lineup for the 2024-2025 Bowers Distinguished Lecture Series. Join us as industry leaders and innovators share their insights and experiences, offering a glimpse into the future of tech.

2024-2025 Bowers Distinguished Lecture Series


Paul England

Distinguished Engineer at Microsoft

Photo: Paul England

Date: 12/4/2024

Time: TBD

Location: TBD

Bio: Paul England has led or contributed to many of the computer industry’s hardware-based security innovations over the last 20 years. Most notable is the field of Confidential Computing: a combination of novel cryptographic operations and hardware/software environments for secure computation. Confidential computing primitives are now a feature of most modern computer systems, and the field remains an area of active research. Paul led the development of the TPM security-processor specification and reference implementation, and he advocated for and co-designed many of the silicon security features in use today (TrustZone, various kinds of secure and authenticated boot, secure enclave technologies, etc.). Paul was elected to the National Academy of Engineering for this work.

Most recently, Paul led Microsoft’s work in content provenance – essentially standards for signing digital media and metadata. This led to his interest in the challenges of AI-generated misinformation.

Paul recently retired from Microsoft after 28 years and has founded Datica Research with the goal of making confidential computing easier to use.

Title: Real or Fake? Technological Approaches to Combating Misinformation

Abstract: Generative AI can create images, video, and audio that are nearly indistinguishable from reality, as well as text comparable to, or better than, text authored by a human. Gen-AI tools and services are starting to power entertainment, creating fun and engaging content, and to reduce business costs. Unfortunately, Gen-AI can also be used for harm: targeting individuals, companies, and societies with compelling and damaging misinformation, cheaply and at scale. Technologists and regulators are responding with tools and regulations intended to separate “real” from “fake” and mitigate the potential harm. While well intentioned, these efforts are probably futile and may even hurt.

This talk will review the state of this arms race in 2024: how Gen-AI is already causing harm, and why the technical mitigations being tried probably won’t work. I’ll conclude with some risky speculation about how people and governments might adapt and respond to living in a sea of AI-generated misinformation.


Julie Cohen

Georgetown University Law Center

Photo: Julie Cohen (via Georgetown University Law Center)

Date: 4/17/2025

Time: TBD

Location: TBD

Bio: Julie E. Cohen is the Mark Claster Mamolen Professor of Law and Technology at the Georgetown University Law Center. She teaches and writes about surveillance, privacy and data protection, intellectual property, information platforms, and the ways that networked information and communication technologies are reshaping legal institutions. She is the author of Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford University Press, 2019); Configuring the Networked Self: Law, Code and the Play of Everyday Practice (Yale University Press, 2012), which won the 2013 Association of Internet Researchers Book Award and was shortlisted for the Surveillance & Society Journal’s 2013 Book Prize; and numerous journal articles and book chapters.

Title: TBD

Abstract: TBD