Today in Indian History

by Sahaj Sankaran (Mon, Wed, Fri, Sun)

I'm Sahaj Sankaran, winner of Yale’s South Asian Studies Prize and the Diane Kaplan Memorial Prize for my historical research, and this is Today in Indian History. Four days a week, I'll dig into the context and consequences of an event in India's history that happened on that date. I'll walk you through what happened, what the world around it looked like at the time, and how it shaped the India we live in today.

Snippet from August 23, 2020: The Lok Sabha Passes the MGNREGA

On 23 August 2005, the Lok Sabha, the lower house of India's Parliament, passed the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA). The act was a sweeping piece of legislation meant to guarantee steady employment and wages for millions of workers in rural areas of the country…

Continue reading


Segfault

with Soham Sankaran (twice monthly)

Computer Science is composed of many different areas of research, such as Algorithms, Programming Languages, and Cryptography. Each of these areas has its own problems of interest, publications of record, idioms of communication, and styles of thought.

Segfault is a podcast series that serves as a map of the field, with each episode featuring discussions about the core motivations, ideas, and methods of one particular area, with a mix of academics ranging from first-year graduate students to long-tenured professors.

I’m your host, Soham Sankaran, the founder of Pashi, a start-up building software for manufacturing. I'm on leave from the PhD program in Computer Science at Cornell, where I work on distributed systems and robotics, and I started Segfault to be the guide to CS research that I desperately wanted when I was just starting out in the field.

Audio from Episode 2: Computer Vision with Professor Bharath Hariharan
Featuring Professor Bharath Hariharan of Cornell University.

Bharath Hariharan: The thing that I as an undergraduate got really excited by was another paper that was used in this – SIFT. SIFT is the Scale-Invariant Feature Transform. It has a few key ideas, very well evaluated, very well motivated. It came out, I think, in 2001 or 2002, and we’re still writing papers trying to beat SIFT. SIFT is still a baseline for us. I read SIFT as an undergraduate, and I thought ‘Wow. This is what I want to do.’ That was what kind of started the whole thing.

Soham Sankaran: Ok. Explain SIFT.

Bharath: So the fundamental problem SIFT was trying to tackle is that… you have two views of the same object, but they might be from very different angles, and the object may look very different. How do we match them? There are two parts to the SIFT paper. One component is detecting these key points, parts of the object that are distinctive enough to use for matching. The second is description. How do you describe these patches so you can match them reliably across two scenes? There are challenges in both, but the key way the paper describes it, which is a very useful way and is the way we describe it now in our courses, is that there is a set of invariances you want. There are certain transformations that might relate these two images. Those transformations should not cause your system to fail. So one transformation they were looking at was scale transformations – one view might be zoomed in, another might be zoomed out. The other is in-plane rotations – 2D rotations of the image, for example. The third is 3D rotations, but to a limited extent. 3D rotations are hard because you don’t know the 3D structure, you just have the image. But if you are careful about it, you can tolerate small amounts of 3D rotation. So what they did was create a feature description and detection pipeline, where they reasoned about how exactly to introduce these invariances. So if what I want is scale invariance, what I should be doing is running my system at many different scales, identifying the optimal scale. That way, no matter which scale the object appears at, I’ll find it.

Soham: So sort of brute-forcing it, in some sense.

Bharath: In some sense, right. The other idea is, if I want invariance to perspective changes or 3D deformations, then what I need is something more sophisticated, which is discretization, quantization, binning, histogramming, those ideas. The combination of these two, search across a variety of transformations and intelligent quantization and histogramming, was something that SIFT introduced. Those ideas kept repeating in various forms in various feature extraction techniques, all the way up to neural networks.
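
To make these ideas concrete, here is a minimal sketch of the detect-describe-match pipeline in Python, using OpenCV's SIFT implementation and Lowe's ratio test. The image paths are placeholders, and the code is our illustration, not anything from the episode:

import cv2

# Load two views of the same object as grayscale images (placeholder paths).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detection and description in one call: keypoints are found by searching
# across scales (the scale-space search described above), and each keypoint
# gets a 128-dimensional descriptor built from histograms of local gradient
# orientations, which is the quantization-and-histogramming idea.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two views, keeping only matches that are
# clearly better than the runner-up candidate (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} confident matches between the two views")

The two searches Bharath describes, over scales at detection time and over quantized gradient orientations at description time, are what buy the invariances; the ratio test then discards ambiguous matches.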

Read the full transcript and show notes

. . .

Segfault Podcast RSS Feed / Subscribe using Apple Podcasts / Subscribe using Google Podcasts

Learn how to copy the RSS feed into your favourite podcast player here


Kernels of Truth

by Ethan Weinberger (Thurs)

I'm Ethan Weinberger. After a brief stint in the hedge fund world, I'm now a PhD student in machine learning at the University of Washington.

The world of AI has seen some real breakthroughs mixed with massive amounts of cash and wildly speculative claims – a perfect recipe for BS. Kernels of Truth takes a deep dive into recent work in the field to determine whether reality matches up with the hype.

Snippet from July 14, 2020: RoboCop and You: Why Facial Recognition Discriminates

Machine learning research moves fast – fast enough that we’re still very far from having any kind of strict code of ethics or set of regulations like those in medicine. As a result, pseudoscience-esque research has a tendency to crop up every now and then, as it did two weeks ago when Harrisburg University issued a now-deleted press release praising one of its research groups for developing an algorithm to predict a person’s “criminality”…

Continue reading


What is Honesty Is Best?

We find ourselves living in interesting times. This is a moment of great pain, incredible uncertainty, and collapsing realities — fertile soil for new ideas, new paths, and new institutions. Honesty Is Best brings people together to think about how we got here and to explore what we should do next in order to build a fundamentally better world on the uneven foundations upon which we are perched.

We will play host to a number of regular series about technology, policy, and culture spanning writing, podcasts, and video. Each of these series will be written or anchored by one or two people working actively in the specific area the series is about. The distinct style of each series will reflect that of its creators, with the common threads being a focus on concrete ideas and a commitment to telling the unvarnished truth as they see it.

We invite you to explore and subscribe to our three current offerings:

Today in Indian History, a four-times-weekly series about the context and consequences of events in India’s past, written by Sahaj Sankaran, winner of Yale’s South Asian Studies Prize and the Diane Kaplan Memorial Prize for his work in Indian history

Segfault, a twice-monthly podcast about Computer Science research hosted by Soham Sankaran, the founder of Pashi and a PhD student in Computer Science at Cornell

Kernels of Truth, a weekly series taking a deep dive into recent hyped-up developments in artificial intelligence, written by Ethan Weinberger, a PhD student in machine learning at the University of Washington.

Take a look at some recent work from Honesty Is Best, or subscribe via email for updates from all our series below: