About this series

Computer Science is composed of many different areas of research, such as Algorithms, Programming Languages, and Cryptography. Each of these areas has its own problems of interest, publications of record, idioms of communication, and styles of thought.

Segfault is a podcast series that serves as a map of the field. Each episode features a discussion of the core motivations, ideas, and methods of one particular area with a mix of academics, ranging from first-year graduate students to long-tenured professors.

I’m your host, Soham Sankaran, the founder of Pashi, a start-up building software for manufacturing. I'm on leave from the PhD program in Computer Science at Cornell, where I work on distributed systems and robotics, and I started Segfault to be the guide to CS research that I desperately wanted when I was just starting out in the field.

twitter: @sohamsankaran, website: https://soh.am, email: soham [at] soh [dot] am.



Episode 2: Computer Vision with Professor Bharath Hariharan

featuring Professor Bharath Hariharan of Cornell University

A snippet from the episode:

Bharath Hariharan: The thing that I as an undergraduate got really excited by was another paper that was used in this – SIFT, the Scale-Invariant Feature Transform. It has a few key ideas, very well evaluated, very well motivated. I think it came out in 2001 or 2002, and we’re still writing papers trying to beat SIFT. SIFT is still a baseline for us. I read SIFT as an undergraduate, and I thought ‘Wow. This is what I want to do.’ That was what kind of started the whole thing.

Soham Sankaran: Ok. Explain SIFT.

Bharath: So the fundamental problem SIFT was trying to tackle is this: you have two views of the same object, but they might be from very different angles, so the object may look very different. How do we match them? There are two parts to the SIFT paper. One component is detecting these key points – parts of the object that are distinctive enough to use for matching. The second is description: how do you describe these patches so you can match them reliably across two scenes? There are challenges in both, but the key way the paper frames it – which is a very useful framing, and is the way we describe it now in our courses – is that there is a set of invariances you want. There are certain transformations that might relate these two images, and those transformations should not cause your system to fail. So one transformation they were looking at was scale – one view might be zoomed in, another might be zoomed out. The second is in-plane rotation – 2D rotations of the image, for example. The third is 3D rotation, but to a limited extent. 3D rotations are hard because you don’t know the 3D structure, you just have the image, but if you are careful about it, you can tolerate small amounts of 3D rotation. So what they did was create a feature detection and description pipeline where they reasoned about how exactly to introduce these invariances. If what I want is scale invariance, I should run my system at many different scales and identify the optimal scale. That way, no matter which scale the object appears at, I’ll find it.

Soham: So sort of brute-forcing it, in some sense.

Bharath: In some sense, right. The other idea is that if I want invariance to perspective changes or 3D deformations, then I need something more sophisticated: discretization, quantization, binning, histogramming – those ideas. The combination of these two – searching across a variety of transformations, and intelligent quantization and histogramming – was something that SIFT introduced. Those ideas kept repeating in various forms in various feature extraction techniques, all the way up to neural networks.
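For readers who want to try the pipeline Bharath describes, here is a minimal sketch using OpenCV’s SIFT implementation – an illustration of the detect-describe-match loop, not code from the episode. It assumes the opencv-python package (version 4.4 or later, which includes SIFT), and the image filenames are placeholders:

    import cv2

    # Two views of the same object (placeholder filenames).
    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()

    # Detection + description: keypoints are found by searching across
    # many scales (the scale-invariance idea), and each descriptor is a
    # histogram of local gradient orientations (the quantization/binning idea).
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors between the two views, keeping only matches that
    # pass Lowe's ratio test to discard ambiguous correspondences.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} reliable correspondences found")

The two stages mirror the two ideas above: brute-force search over scales, plus intelligent histogramming of local structure.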

Read the full transcript and show notes



Episode 1: Programming Languages

featuring Adrian Sampson, Alexa VanHattum, and Rachit Nigam of Cornell's Capra group

Adrian Sampson, Alexa VanHattum, and Rachit Nigam of Cornell’s Capra group join me to discuss their differing perspectives on what the research field of Programming Languages (PL) is really about, the value of the PL perspective on problems in Computer Science, and what got them interested in working in the area in the first place. We also talk about some of their recent research work on programming languages for hardware accelerator design.

A snippet from the episode:

Rachit Nigam: PL people, at least in my eyes, don’t do work in isolation – or shouldn’t do work in isolation. They should go to a field – for example, in our group we work on architecture and hardware abstractions – and figure out what the abstractions really mean. Take functions: people have had functions in every language for a really long time, but the meaning of functions is not well understood, and PL people have been trying to formalise them for a really long time. Understanding them allows you to build more powerful abstractions and to think about what your programs really mean. I keep saying ‘what programs really mean’, and I really want to stress this point, because once you know what programs really mean, you can do all sorts of cool things, like verifying the programs and trying to automagically synthesize parts of your program. But to do any of that, you have to understand what your programs mean, and I think that’s what PL people do fundamentally. They go to a field – networking, security, architecture – pick a language, and figure out what the language is actually trying to say and what ideas it tries to capture.

Soham Sankaran: I see. So in a caricatured way, what you’re doing is going to people and saying ‘Ah! I see what you’re doing but there’s a broader organizational principle to what you could be doing that we can demonstrate to you’.

Rachit Nigam: I think when you can successfully do it, you can really change fields. If you look at the history of computing, it’s a history of languages. When a language can really express the ideas, you can build bigger and better stuff quickly. I think you can do it, it’s just hard…
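As a toy illustration of ‘figuring out what programs really mean’ – this sketch is ours, not from the episode – here is a substitution-based evaluator for the untyped lambda calculus, where the meaning of calling a function is pinned down by a single rule, beta reduction (Python 3.10+; variable capture is glossed over for brevity):

    from dataclasses import dataclass

    # Abstract syntax: variables, function definitions (lambdas), and calls.
    @dataclass
    class Var:
        name: str

    @dataclass
    class Lam:
        param: str
        body: "Term"

    @dataclass
    class App:
        fn: "Term"
        arg: "Term"

    Term = Var | Lam | App

    def subst(term: Term, name: str, value: Term) -> Term:
        """Replace free occurrences of `name` in `term` with `value`."""
        match term:
            case Var(n):
                return value if n == name else term
            case Lam(p, b):
                # A binder that shadows `name` stops substitution. (A full
                # treatment would rename bound variables to avoid capture.)
                return term if p == name else Lam(p, subst(b, name, value))
            case App(f, a):
                return App(subst(f, name, value), subst(a, name, value))

    def eval_term(term: Term) -> Term:
        """Call-by-value evaluation: a call means 'substitute the argument
        into the body' – beta reduction, one formal meaning of functions."""
        match term:
            case App(f, a):
                fn, arg = eval_term(f), eval_term(a)  # fn must evaluate to a Lam
                return eval_term(subst(fn.body, fn.param, arg))
            case _:
                return term  # variables and lambdas are already values

    # (\x. x) applied to (\y. y) evaluates to (\y. y)
    identity = Lam("x", Var("x"))
    print(eval_term(App(identity, Lam("y", Var("y")))))

Once calls have a precise meaning like this, tools can reason about programs mechanically – which is what makes the verification and synthesis Rachit mentions possible.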

Soham Sankaran’s Y Combinator-backed startup, Pashi, is recruiting a software engineer to do research-adjacent work in programming languages and compilers. If you’re interested, email soham [at] pashi.com for more information.

Read the full transcript and show notes


Subscribe to Segfault

Learn how to copy the RSS feed into your favourite podcast player here


What happened to the old Segfault?

In the ancient times of 2015, my friend Eric Lu and I started Segfault as a way for us to publicly shoot the shit about cybersecurity, geopolitics, Taylor Swift, and Worse Is Better. An archive of those episodes is available at segfaultpod.com/old.


What is Honesty Is Best?

We find ourselves living in interesting times. This is a moment of great pain, incredible uncertainty, and collapsing realities — fertile soil for new ideas, new paths, and new institutions. Honesty Is Best brings people together to think about how we got here and to explore what we should do next in order to build a fundamentally better world on the uneven foundations upon which we are perched.

We will play host to a number of regular series about technology, policy, and culture spanning writing, podcasts, and video. Each of these series will be written or anchored by one or two people working actively in the specific area the series is about. The distinct style of each series will reflect that of its creators, with the common threads being a focus on concrete ideas and a commitment to telling the unvarnished truth as they see it.

We invite you to explore and subscribe to our three current offerings:

Today in Indian History, a four-times-weekly series about the context and consequences of events in India’s past, written by Sahaj Sankaran, winner of Yale’s South Asian Studies Prize and the Diane Kaplan Memorial Prize for his work in Indian history.

Segfault, a twice-monthly podcast about Computer Science research hosted by Soham Sankaran, the founder of Pashi and a PhD student in Computer Science at Cornell.

Kernels of Truth, a weekly series taking a deeper dive into recent hyped-up developments in artificial intelligence by Ethan Weinberger, a PhD student in machine learning at the University of Washington.

Take a look at some recent work from Honesty Is Best, or subscribe via email for updates from all our series below: