EmTech Digital 2019 (SF)

I spent the past two days at the EmTech Digital conference in SF.

Here are a few key takeaways:

1. Two of the biggest challenges for AI technologies are (a) dataset and algorithm biases and (b) adversarial attacks and deepfakes. Here is a paper from Microsoft that shows one way biases can be identified and handled, using simulated data and Bayesian optimization over parameters (such as skin tone and age) in a face-detection problem. Here is a report on the performance of OpenAI's GPT-2 model and why OpenAI is not releasing the full version of GPT-2.
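As a toy illustration of the bias-auditing idea (not the method from the Microsoft paper), one can compare a model's error rates across subgroups; the group names, predictions, and labels below are all fabricated:

```python
# Hypothetical sketch: auditing a classifier for bias by comparing error
# rates across subgroups (e.g., skin tone). The data is a stand-in, not
# output from any real face-detection system.

def error_rate(predictions, labels):
    """Fraction of examples the model gets wrong."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels)

def audit_by_group(records):
    """records: list of (group, prediction, label) tuples.
    Returns per-group error rates so disparities stand out."""
    groups = {}
    for group, pred, label in records:
        groups.setdefault(group, []).append((pred, label))
    return {g: error_rate([p for p, _ in pairs], [y for _, y in pairs])
            for g, pairs in groups.items()}

# Toy data: the model errs far more often on one subgroup.
records = [
    ("light_skin", 1, 1), ("light_skin", 1, 1), ("light_skin", 0, 0), ("light_skin", 1, 1),
    ("dark_skin", 0, 1), ("dark_skin", 1, 1), ("dark_skin", 0, 1), ("dark_skin", 1, 1),
]
rates = audit_by_group(records)
print(rates)  # dark_skin error rate (0.5) >> light_skin (0.0) flags a bias
```

A real audit would use held-out evaluation data per subgroup, but the per-group comparison is the core of the idea.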

2. Examples of #AIFails: Amazon Rekognition falsely matched 28 members of Congress to mugshots of people who had been arrested, with people of color disproportionately represented among the false matches. IBM and Amazon are collaborating with police departments (e.g., the NYPD) to provide face surveillance software, but the Rekognition example above shows that we are not ready for large-scale deployment of face surveillance software.

3. Microsoft and Google are working on Federated Learning. From the Google AI blog: "For models trained from user interaction with mobile devices, we're introducing an additional approach: Federated Learning. Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. It works like this: your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud."
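The round described above can be sketched as federated averaging; the toy 1-D linear model and the function names below are my own illustration, not Google's implementation:

```python
# Minimal FedAvg-style sketch: each "device" trains locally and ships only
# a small model update; the server averages updates, never seeing raw data.

def local_update(weights, data, lr=0.1):
    """One gradient step on-device for a 1-D linear model y = w*x.
    Returns only the weight delta, not the raw data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return -lr * grad  # the "small focused update" sent to the server

def federated_round(weights, device_datasets):
    """Server averages per-device updates, weighted by dataset size."""
    total = sum(len(d) for d in device_datasets)
    avg_delta = sum(len(d) * local_update(weights, d) for d in device_datasets) / total
    return weights + avg_delta

# Two devices with private data drawn from y = 2x; raw data never leaves them.
devices = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # converges toward 2.0
```

Production systems also encrypt and secure-aggregate the updates; this sketch shows only the averaging step.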
 

4. Differential Privacy was mentioned as a method some companies such as Uber are taking to protect consumer privacy. 

5. Reinforcement Learning with Imagined Goals (inspired by the way babies learn, e.g., by setting simple goals for themselves such as reaching for an object) may take us closer to achieving Artificial General Intelligence: see paper
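A minimal sketch of the goal-conditioned reward behind the imagined-goals idea: the agent scores a state by its distance to a goal it proposed for itself. The vectors below stand in for (latent) states; this is an illustration, not the paper's actual algorithm:

```python
# Hedged sketch: reward = negative distance between the current state and a
# self-imagined goal, so making progress toward the goal raises the reward.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def goal_reward(state, imagined_goal):
    """Reward is higher (closer to 0) the nearer the agent is to its goal."""
    return -distance(state, imagined_goal)

goal = (1.0, 0.0)                 # a goal the agent "imagined" for itself
far, near = (0.0, 0.0), (0.9, 0.1)
print(goal_reward(far, goal) < goal_reward(near, goal))  # True: progress raises reward
```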

6. Several companies, such as Samsung, are focusing on AI at the edge (i.e., AI that runs on the device, with data captured on the device, and does not interface with the cloud) to address data-privacy issues.

7. Kebotix, a company focused on discovering new molecules (for chemistry, pharma, etc.), uses latent space interpolation between two known molecules. Latent space interpolation also seems like a very interesting concept for aesthetically merging or interpolating between two images or other inputs.
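The interpolation itself is simple: blend two latent vectors and decode points along the line between them. In the sketch below the latent codes are made up and the decoder is a placeholder; a real pipeline would use a trained encoder/decoder (e.g., a VAE's):

```python
# Sketch of latent space interpolation between two known items (molecules,
# images, ...). Latent codes and the "decoder" are illustrative stand-ins.

def lerp(z1, z2, t):
    """Linear interpolation between latent vectors z1 and z2 at t in [0, 1]."""
    return [a + t * (b - a) for a, b in zip(z1, z2)]

def decode(z):
    return z  # placeholder for a trained decoder

z_mol_a = [0.0, 1.0, 2.0]   # latent code of known molecule A (made up)
z_mol_b = [4.0, 1.0, 0.0]   # latent code of known molecule B (made up)

# Candidate points along the path between the two known molecules:
candidates = [decode(lerp(z_mol_a, z_mol_b, t)) for t in (0.25, 0.5, 0.75)]
print(candidates[1])  # midpoint: [2.0, 1.0, 1.0]
```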

Below are summaries and screenshots of some of the talks.

[1] Harry Shum (Microsoft)

Harry spoke about the democratization of AI (i.e., making AI more accessible to everyone) and about Microsoft's ethics committee, which reviews every Microsoft AI project in depth, decides whether it meets a certain ethics standard, and then either releases or stops it. When asked for an example of a Microsoft project/product that was scrapped because it did not meet those ethics standards: no reply.

Below are a few slides from his talk:

How to deal with deepfakes? Looking for the source of the article may help.



[2] Dawn Song (Oasis labs)

Dawn Song works on security and privacy issues in systems, software, networking, and databases. Her group uses a number of techniques, including differential privacy. Differential privacy can be applied to large datasets by adding noise generated from a Laplace distribution. Read up more on it. It's pretty cool.
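The Laplace mechanism mentioned above can be sketched in a few lines: add noise drawn from Laplace(sensitivity / epsilon) to a query answer. Parameter names follow the textbook definition; this is an illustration, not Oasis Labs' or Uber's actual code:

```python
# Minimal sketch of the Laplace mechanism for differential privacy.

import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon=0.5):
    """Differentially private count query. A count has sensitivity 1:
    adding or removing one person changes it by at most 1."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 60, 37]
noisy = private_count(ages, lambda a: a >= 40)  # true answer is 3, plus noise
print(round(noisy, 1))
```

Smaller epsilon means more noise and stronger privacy; the noise averages out over many queries only at the cost of the privacy budget.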

The Netflix Prize was a contest held in 2006 for which Netflix released a large anonymized dataset of Netflix user ratings. Researchers Arvind Narayanan and Vitaly Shmatikov showed that the dataset could be de-anonymized using some additional information from IMDb. This shows that privacy goes far beyond simple data anonymization.
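The core of that de-anonymization is a linkage attack: join the "anonymized" table against a public auxiliary source on quasi-identifiers (titles rated plus approximate dates). The data below is fabricated for illustration, and the real attack used a more robust similarity score:

```python
# Toy sketch of a linkage attack: re-identify an "anonymous" user by the
# overlap between their ratings and public reviews. All data is made up.

anonymized = {
    "user_417": {("MovieA", "2005-07"), ("MovieB", "2005-08"), ("MovieC", "2005-09")},
}

public_reviews = {
    "alice_on_imdb": {("MovieA", "2005-07"), ("MovieB", "2005-08")},
    "bob_on_imdb":   {("MovieD", "2004-01")},
}

def best_match(anon_ratings, public):
    """Re-identify by overlap of (title, month) pairs."""
    scores = {name: len(anon_ratings & revs) for name, revs in public.items()}
    return max(scores, key=scores.get)

print(best_match(anonymized["user_417"], public_reviews))  # "alice_on_imdb"
```

Even a handful of (title, date) pairs is often enough to single out one user, which is why anonymization alone does not guarantee privacy.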


[3] Sergey Levine [Assistant Professor, UC Berkeley]

Robots that learn by doing. Emphasis on Imagined Goals. 


[4] Chat with Fei-Fei Li [Stanford University]

Human-centered AI Institute at Stanford (Stanford HAI). There is a lot of emphasis on healthcare.
They have a project aimed at using AI for helping the ageing population. 
She runs a program called AI4ALL for high school kids.



[5] Chazz Sims [WISE SYSTEMS]

AI in logistics and transportation.

[6] Tanya Mishra [Affectiva]

Rosalind Picard (also the founder of Empatica) is a co-founder of Affectiva. Follow their blog for more.

[7] Jill [Kebotix]

Using latent space interpolation for discovering new molecules.


[8] Daphne Koller [Insitro]

Insitro is on a mission to collect large biological datasets. Daphne thinks that patients should, by default, be opted in to sharing their data for research.

