Neural networks have been used for all kinds of AI tasks, from photo recognition to natural-sounding machine translation.
Stephen Ornes, writing for Quanta Magazine:
Neural nets train on large data sets — the larger the better — and use statistics to make very good approximations. In the process, they learn what produces the best outcomes. Language translation programs particularly shine: Instead of translating word by word, they translate phrases in the context of the whole text.
However, neural networks have always lagged in one conspicuous area: solving difficult symbolic math problems. These include the hallmarks of calculus courses, like integrals or ordinary differential equations. The hurdles arise from the nature of mathematics itself, which demands precise solutions. Neural nets instead tend to excel at probability. They learn to recognize patterns — which Spanish translation sounds best, or what your face looks like — and can generate new ones.
It is almost as if neural networks fail at exactly the tasks that don't require creativity.
Last year Guillaume Lample and François Charton, a pair of computer scientists working in Facebook’s AI research group in Paris, presented a breakthrough approach to teaching AI to do symbolic math: teach the computer to "speak" math.
To allow a neural net to process the symbols like a mathematician, Charton and Lample began by translating mathematical expressions into more useful forms. They ended up reinterpreting them as trees — a format similar in spirit to a diagrammed sentence. Mathematical operators such as addition, subtraction, multiplication and division became junctions on the tree. So did operations like raising to a power, or trigonometric functions. The arguments (variables and numbers) became leaves. The tree structure, with very few exceptions, captured the way operations can be nested inside longer expressions.
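To make the idea concrete, here is a minimal sketch (my own illustration, not the authors' code) of what such an expression tree might look like: operators become internal nodes, variables and numbers become leaves, and the tree can be flattened into a prefix-notation token sequence, which is one common way to feed a tree to a sequence-to-sequence model. The `Node` class and the example expression `3·x² + cos(2·x)` are assumptions chosen purely for illustration.

```python
# Minimal sketch of an expression tree, in the spirit of the approach above.
# Operators (+, *, ^, cos, ...) are internal nodes; variables and numbers are leaves.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One node of an expression tree: an operator, a variable, or a number."""
    value: str
    children: List["Node"] = field(default_factory=list)

    def to_prefix(self) -> List[str]:
        """Flatten the tree into a prefix (Polish notation) token sequence,
        a common way to serialize trees for a sequence-to-sequence model."""
        tokens = [self.value]
        for child in self.children:
            tokens.extend(child.to_prefix())
        return tokens


# Example expression: 3 * x^2 + cos(2 * x)
expr = Node("+", [
    Node("*", [
        Node("3"),
        Node("^", [Node("x"), Node("2")]),
    ]),
    Node("cos", [
        Node("*", [Node("2"), Node("x")]),
    ]),
])

print(expr.to_prefix())
# ['+', '*', '3', '^', 'x', '2', 'cos', '*', '2', 'x']
```

Once expressions are token sequences like this, the problem starts to look like translation: map the sequence for an integrand to the sequence for its integral, the same way a translation model maps a French sentence to an English one.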
Read the full article on Quanta Magazine.