Wednesday, June 29, 2016

MegaFace / Challenge

http://www.washington.edu/news/2016/06/23/how-well-do-facial-recognition-algorithms-cope-with-a-million-strangers/
http://megaface.cs.washington.edu/participate/challenge.html

For current results see the leaderboard!

Get Started

Experiment

  • Identification and Verification
    1. Download MegaFace and FaceScrub datasets and development kit
    2. Run your algorithm to produce features for both datasets (a minimal sketch follows this list)
    3. Run our experiment script with 10, 100, 1000, 10000, 100000, 1000000 distractors
    4. Upload results to the Google Drive folder to which you received access. Please also upload links to feature files for the full FaceScrub and MegaFace datasets
    More information about the experiment and the development kit files can be found in the development kit readme
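
As a rough illustration of step 2, here is a minimal Python sketch of producing one feature file per image with OpenCV. The embed() function is a placeholder for your own model, and the .npy output and directory names are assumptions for illustration only; the development kit readme defines the actual feature-file format and naming that the experiment script expects.

    # Minimal sketch of step 2 (computing features), assuming a generic
    # embedding model. The real feature-file format and naming convention
    # are specified in the development kit readme, not here.
    import os
    import cv2          # OpenCV, listed under "Required Others"
    import numpy as np

    def embed(image):
        """Placeholder for your face-recognition model.

        Replace with your own network; here we just return a flattened,
        resized grayscale crop so the script runs end to end.
        """
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, (16, 16)).astype(np.float32).ravel()

    def compute_features(image_root, feature_root):
        """Walk a dataset directory and write one feature file per image."""
        for dirpath, _, filenames in os.walk(image_root):
            for name in filenames:
                if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                    continue
                image = cv2.imread(os.path.join(dirpath, name))
                if image is None:           # unreadable file, skip it
                    continue
                feat = embed(image)
                rel = os.path.relpath(dirpath, image_root)
                out_dir = os.path.normpath(os.path.join(feature_root, rel))
                os.makedirs(out_dir, exist_ok=True)
                # .npy is used here only for illustration; the dev kit
                # expects its own binary format (see the readme).
                np.save(os.path.join(out_dir, name + ".npy"), feat)

    if __name__ == "__main__":
        compute_features("MegaFace/images", "MegaFace/features")
        compute_features("FaceScrub/images", "FaceScrub/features")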

Necessary Files

  • Datasets
  • Training Set
    • You may train with any set except for FaceScrub, MegaFace, and FGNET
    • Some systems are trained on millions of people, and others on several thousand. One of our goals is to compare face recognition algorithms independently of the training data, so please specify the number of training photos and the number of unique people you used for training. The results will be tiered accordingly; for example, if you trained on 1K photos you won’t compete with groups that trained on 1M photos.
  • Linux Development Kit (.zip) (.tar.gz) — 16 MB
  • Required Others
    • OpenCV (link)
      Open source computer vision and machine learning software library

Frequently Asked Questions

  • What should we do if we cannot detect a face in some photos?
    • If you cannot detect a face in a photo, use the landmarks we provide in the JSON files (a reading sketch follows this FAQ).
    • Landmark meaning: landmark 0 is the center of the right eye, 1 is the center of the left eye, and 2 is the tip of the nose.
    • If one or more landmarks are missing, the corresponding points are occluded in the photo.
  • Why are there fewer FaceScrub feature files in your features than in the whole set?
    • We use a subset of FaceScrub for initial tests (to speed up testing) but will use the full FaceScrub set for additional tests, so please compute features for the full set.
  • Do we need to submit our features for the full MegaFace and FaceScrub?
    • Yes, please submit links to all the MegaFace features and all the FaceScrub features.
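
As referenced in the landmark FAQ above, here is a hedged Python sketch of reading the provided landmarks as a fallback when detection fails. The "landmarks" field and the per-point "x"/"y" keys are hypothetical placeholders for illustration; check the JSON files shipped with the datasets for the real schema.

    # Hedged sketch of falling back to the provided landmarks when face
    # detection fails. The JSON field names used here ("landmarks", "x", "y")
    # are assumptions; consult the actual metadata files for the real schema.
    import json

    LANDMARK_NAMES = {0: "right eye center", 1: "left eye center", 2: "nose tip"}

    def load_landmarks(json_path):
        """Return {index: (x, y)} for the landmarks present in one metadata file."""
        with open(json_path) as f:
            meta = json.load(f)
        points = {}
        for i, pt in enumerate(meta.get("landmarks", [])):
            if pt is None:          # missing landmark => the point is occluded
                continue
            points[i] = (pt["x"], pt["y"])
        return points

    # Example usage: align on the eyes only if both are visible.
    # pts = load_landmarks("some_photo.json")
    # if 0 in pts and 1 in pts:
    #     right_eye, left_eye = pts[0], pts[1]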

Please cite the paper if you use our code, results, or dataset in a publication (link)

Optional Files

  • Our Feature Files


Monday, June 27, 2016

How to Start Learning Deep Learning

http://ofir.io/How-to-Start-Learning-Deep-Learning/

Ofir Press



Due to the recent achievements of artificial neural networks across many different tasks (such as face recognition, object detection and Go), deep learning has become extremely popular. This post aims to be a starting point for those interested in learning more about it.
If you already have a basic understanding of linear algebra, calculus, probability and programming: I recommend starting with Stanford’s CS231n. The course notes are comprehensive and well written. The slides for each lesson are also available, and even though the accompanying videos were removed from the official site, re-uploads are quite easy to find online.
If you don’t have the relevant math background: There is an incredible amount of free material online that can be used to learn the required math knowledge. Gilbert Strang’s course on linear algebra is a great introduction to the field. For the other subjects, edX has courses from MIT on both calculus and probability.
If you are interested in learning more about machine learning: Andrew Ng’s Coursera class is a popular choice as a first class in machine learning. There are other great options available, such as Yaser Abu-Mostafa’s machine learning course, which focuses much more on theory than the Coursera class but is still suitable for beginners. Knowledge of machine learning isn’t really a prerequisite for learning deep learning, but it does help. In addition, learning classical machine learning and not only deep learning is important because it provides a theoretical background and because deep learning isn’t always the correct solution.
CS231n isn’t the only deep learning course available online. Geoffrey Hinton’s Coursera class “Neural Networks for Machine Learning” covers a lot of different topics, and so does Hugo Larochelle’s “Neural Networks Class”. Both of these classes contain video lectures. Nando de Freitas also has a course available online which contains videos, slides and also a list of homework assignments.
If you prefer reading over watching video lectures: Neural Networks and Deep Learning is a free online book for beginners to the field. The Deep Learning Book is also a great free book, but it is slightly more advanced.
Where to go after you’ve got the basics:
  • Computer Vision is covered by most, if not all, of the deep learning resources mentioned above.
  • Recurrent Neural Networks (RNNs) are the basis of neural network models that solve tasks involving sequences, such as machine translation or speech recognition. Andrej Karpathy’s blog post on RNNs is a great place to start learning about them. Christopher Olah has a great blog where many deep learning concepts are explained in a very visual and easy to understand way. His post on LSTM networks introduces LSTMs, a widely used RNN variant (the standard update equations are sketched after this list).
  • Natural Language Processing (NLP): CS224d is an introduction to NLP with deep learning. Advanced courses are available from both Kyunghyun Cho (with lecture notes here) and Yoav Goldberg.
  • Reinforcement Learning: If you’d like to control robots or beat the human champion of Go, you should probably use reinforcement learning. Andrej Karpathy’s post on deep reinforcement learning is an excellent starting point. David Silver also recently published a short blog post introducing deep reinforcement learning.
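
For reference, the standard LSTM update equations mentioned in the RNN item above are as follows (common notation, not taken from the linked posts):

    \begin{aligned}
    f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{forget gate} \\
    i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{input gate} \\
    o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{output gate} \\
    \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{candidate cell state} \\
    c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{new cell state} \\
    h_t &= o_t \odot \tanh(c_t) && \text{new hidden state}
    \end{aligned}
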
Deep learning frameworks: There are many frameworks for deep learning, but the top three are probably TensorFlow (by Google), Torch (by Facebook) and Theano (by MILA). All of them are great, but if I had to select just one to recommend, I’d say that TensorFlow is the best for beginners, mostly because of the great tutorials available.
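
As a taste of what TensorFlow code looks like, here is a minimal sketch of training a tiny network on XOR, written against the 1.x-style graph-and-session API that was current around the time of this post (under TensorFlow 2 you would use Keras or tf.compat.v1 instead). The layer sizes and hyperparameters are arbitrary choices for illustration.

    # Minimal TensorFlow sketch (1.x graph/session API): a tiny network learning XOR.
    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 2])   # inputs
    y = tf.placeholder(tf.float32, [None, 1])   # targets

    # One hidden layer of 8 ReLU units, then a single logit.
    W1 = tf.Variable(tf.random_normal([2, 8], stddev=0.5))
    b1 = tf.Variable(tf.zeros([8]))
    hidden = tf.nn.relu(tf.matmul(x, W1) + b1)
    W2 = tf.Variable(tf.random_normal([8, 1], stddev=0.5))
    b2 = tf.Variable(tf.zeros([1]))
    logits = tf.matmul(hidden, W2) + b2

    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
    train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

    data_x = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
    data_y = [[0.], [1.], [1.], [0.]]

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(2000):
            sess.run(train_op, feed_dict={x: data_x, y: data_y})
        # Predictions should approach [0, 1, 1, 0].
        print(sess.run(tf.nn.sigmoid(logits), feed_dict={x: data_x}))
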
If you’d like to train neural networks you should probably do it on a GPU. You don’t have to, but it’s much faster if you do. NVIDIA cards are the industry standard, and while most research labs use $1,000 graphics cards, there are a few affordable cards that can also get the work done. An even cheaper option is to rent a GPU-enabled instance from a cloud provider like Amazon’s EC2 (short guide here).
Good luck!
Written on June 26, 2016

Friday, June 17, 2016

What’s Next for Artificial Intelligence



http://www.wsj.com/articles/whats-next-for-artificial-intelligence-1465827619

The best minds in the business—Yann LeCun of Facebook, Luke Nosek of the Founders Fund, Nick Bostrom of Oxford University and Andrew Ng of Baidu—on what life will look like in the age of the machines

HOW DO YOU TEACH A MACHINE?
Yann LeCun, director of artificial-intelligence research at Facebook, on a curriculum for software
The traditional definition of artificial intelligence is the ability of machines to execute tasks and solve problems in ways normally attributed to humans. Some tasks that we consider simple—recognizing an object in a photo, driving a car—are incredibly complex for AI. Machines can surpass us when it comes to things like playing chess, but those machines are limited by the manual nature of their programming; a $30 gadget can beat us at a board game, but it can’t do—or learn to do—anything else.
This is where machine learning comes in. Show millions of cat photos to a machine, and it will hone its algorithms to improve at recognizing pictures of cats. Machine learning is the basis on which all large Internet companies are built, enabling them to rank responses to a search query, give suggestions and select the most relevant content for a given user.

Deep learning, modeled on the human brain, is infinitely more complex. Unlike machine learning, deep learning can teach machines to ignore all but the important characteristics of a sound or image—a hierarchical view of the world that accounts for infinite variety. It’s deep learning that opened the door to driverless cars, speech-recognition engines and medical-analysis systems that are sometimes better than expert radiologists at identifying tumors.
Despite these astonishing advances, we are a long way from machines that are as intelligent as humans—or even rats. So far, we’ve seen only 5% of what AI can do.
IS IT TIME TO RETHINK YOUR CAREER?
Andrew Ng, chief scientist at Chinese Internet giant Baidu, on how AI will impact what we do for a living
Truck driving is one of the most common occupations in America today: Millions of men and women make their living moving freight from coast to coast. Very soon, however, all those jobs could disappear. Autonomous vehicles will cover those same routes in a faster, safer and more efficient manner. What company, faced with that choice, would choose expensive, error-prone human drivers?
There’s a historical precedent for this kind of labor upheaval. Before the Industrial Revolution, 90% of Americans worked on farms. The rise of steam power and manufacturing left many out of work, but also created new jobs—and entirely new fields that no one at the time could have imagined. This sea change took place over the course of two centuries; America had time to adjust. Farmers tilled their fields until retirement, while their children went off to school and became electricians, factory foremen, real-estate agents and food chemists.
Truck drivers won’t be so lucky. Their jobs, along with millions of others, could soon be obsolete. The age of intelligent machines will see huge numbers of individuals unable to work, unable to earn, unable to pay taxes. Those workers will need to be retrained—or risk being left out in the cold. We could face labor displacement of a magnitude we haven’t seen since the 1930s.
In 1933, Franklin Roosevelt’s New Deal provided relief for massive unemployment and helped kick-start the economy. More important, it helped us transition from an agrarian society to an industrial one. Programs like the Public Works Administration improved our transportation infrastructure by hiring the unemployed to build bridges and new highways. These improvements paved the way for broad adoption of what was then exciting new technology: the car.
We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.
AI: JUST LIKE US?
How intelligent machines could resemble their makers
The next step in achieving human-level AI is creating intelligent—but not autonomous—machines. The AI system in your car will get you safely home, but won’t choose another destination once you’ve gone inside. From there, we’ll add basic drives, along with emotions and moral values. If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws. But a “Terminator”-style scenario is, in my view, immensely improbable. It would require a discrete, malevolent entity to specifically hard-wire malicious intent into intelligent machines, and no organization, let alone a single group or a person, will achieve human-level AI alone. Building intelligent machines is one of the greatest scientific challenges of our time, and it will require the sharing of ideas across countries, companies, labs and academia. Progress in AI is likely to be gradual—and open. —Yann LeCun


HOW TO MASTER THE MACHINES
Nick Bostrom, founding director of the Future of Humanity Institute at Oxford University, on the existential risk of AI. Interviewed by Daniela Hernandez.
Can you tell me about the work you’re doing?
We are interested in the technical challenges related to the “control problem.” How can you ensure that [AI] will do what the programmers intended? We’re also interested in studying the economic, political and social issues that arise when you have these superintelligent AIs. What kinds of political institutions would be most helpful to deal with this transition to the machine-intelligence era? How can we ensure that different stakeholders come together and do something that can lead to a good outcome?
Much of your work has focused on existential risk. How would you explain that to a 5-year-old?
I would say it’s technology that could permanently destroy the entire future for all of humanity. For a slightly older audience, I would say there’s the possibility of human extinction or the permanent destruction of our potential to achieve value in the future.
What are some of the strategies you think will help mitigate the potential existential risks of artificial intelligence?
Work on the control problem could be helpful. By the time we figure out how to make machines really smart, we should have some ideas about how to control such a thing, how to engineer it so that it will be on our side, aligned with human values and not destructive. That involves a bunch of technical challenges, some of which we can start to work on today.
Can you give me an example?
There are different ideas on how to approach this control problem. One line of attack is to study value learning. We would want the AI we build to ultimately share our values, so that it can work as an extension of our will. It does not look promising to write down a long list of everything we care about. It looks more promising to leverage the AI’s own intelligence to learn about our values and what our preferences are.
Values differ from person to person. How do we decide what values a machine should learn?
Well, this is a big and complicated question: the possibility of profound differences between values and conflicting interests. And this is in a sense the biggest remaining problem. If you’re optimistic about technological progress, you’ll think that eventually we’ll figure out how to do more and more.
We will conquer nature to an ever-greater degree. But the one thing that technology does not automatically solve is the problem of conflict, of war. At the darkest macroscale, you have the possibility of people using this advance, this power over nature, this knowledge, in ways designed to harm and destroy others. That problem is not automatically solved.
How might we be able to deal with that tension?
I don’t have a simple answer to that. I don’t think there’s an easy technofix.
Wouldn’t a self-programming agent be able to free itself from the shackles of the control systems under which we place them? Humans do this all the time already, to some extent, when we act selfishly.
The conservative assumption would be that the superintelligent AI would be able to reprogram itself, would be able to change its values, and would be able to break out of any box that we put it in. The goal, then, would be to design it in such a way that it would choose not to use those capabilities in ways that would be harmful to us. If an AI wants to serve humans, it would assign a very low expected utility to an action that would lead it to start killing humans. There are fundamental reasons to think that if you set up the goal system in a proper way, these ultimate decision criteria would be preserved.

LET’S IMPROVE THE MINDS WE HAVE

Luke Nosek, co-founder of PayPal and the Founders Fund, on the need to train our brains before the artificial ones arrive
Earlier this year, the Korean Go champion Lee Sedol played a historic five-game match against Google’s AlphaGo, an artificially intelligent computer program. Sedol has 18 world championships to his name. On March 15, 2016, he lost to software.
High-performance computing today is unprecedentedly powerful. Still, we remain stages away from creating an artificial general intelligence with anywhere near the capabilities of the human mind. We don’t yet understand how general, human-level AI (sometimes referred to as AGI, or strong AI) will work or what influence it will have on our lives and economy. The scale of impact is often compared to the advent of nuclear technology, and everyone from Stephen Hawking to Elon Musk to the creator of AlphaGo has advised that we proceed with caution.
The nuclear comparison is charged but apt. As with nuclear technology, the worst-case scenario for strong AI—malevolent superintelligence turns on humanity and tries to kill it—would be globally devastating. Conversely, the optimistic predictions are so blindingly positive (universal economic prosperity, elimination of disease) that we may be biased by both undue fear and optimism.
Strong AI could help billions of people lead safer, healthier, happier lives. But to design this machine, engineers will need a better understanding—greater than that of anyone alive today—of the complex social, neurological and economic realities faced by a society of intelligent humans and machines. And if we upgrade the minds we already have, we’ll be better equipped to conceptualize, build and coexist with strong AI.
We can divide the enhancement of human intelligence into three stages. The first, using technology like Google Search to augment and supplement the human mind, is well under way. Compare a fifth-grader with a library card in 1996 to a fifth-grader on the Google home page in 2016—just keystrokes from much of human knowledge.
If stage one involves supplementing the mind with technology, then stage two is about amplifying the mind directly. Adaptive learning software personalizes education and makes adjustments to lessons in real time. If a student is excelling, the pace will increase. If he or she is struggling, the program might slow down, switch teaching styles or signal to the instructor that assistance is needed. Adaptive learning and online education could mean the end of one-size-fits-all education. Integration with virtual and augmented reality could also amplify intelligence in unexpected ways.
Stage three of intelligence enhancement involves a fundamental transformation of the mind. Transcranial magnetic stimulation, or TMS, is a noninvasive, FDA-approved treatment in which an electromagnetic coil is applied to the head. TMS is currently being used to treat post-traumatic stress disorder, autism and drug-resistant major depression. Sample sizes at such facilities as the Brain Treatment Center in Newport Beach, Calif., and the University of Louisville in Kentucky are small and the duration of impact unknown, but high percentages of individuals—up to 90% for a trial with 200 higher-functioning autistic patients—have shown improvement. Initial signs indicate that TMS could be effective for a wide, seemingly unrelated range of neurological conditions. If we can positively affect injured or non-neurotypical brains, we may not be far from improving connections in healthy brains and enhancing intelligence in a generalized way.
Strong AI appears to be on the horizon, but for now the human mind is the only one we have. Enhancing our own intelligence is the first step toward creating—and successfully coexisting with—the intelligent machines of the future.
YOU CAN’T TEACH (MACHINES) COMMON SENSE
At least not yet. And it’s the biggest barrier to true artificial intelligence.
Predictive learning, also called unsupervised learning, is the principal mode by which animals and humans come to understand the world. Take the sentence “John picks up his phone and leaves the room.” Experience tells you that the phone is probably a mobile model and that John made his exit through a door. A machine, lacking a good representation of the world and its constraints, could never have inferred that information. Predictive learning in machines—an essential but still undeveloped feature—will allow AI to learn without human supervision, as children do. But teaching common sense to software is more than just a technical question—it’s a fundamental scientific and mathematical challenge that could take decades to solve. And until then, our machines can never be truly intelligent. —Yann LeCun