Now Word Problems Can Be Solved by Computers
When I was first introduced to computers years ago, I wasn't impressed.
All I wanted to know was whether the machine could directly answer the questions I typed in. The answer then was no. It wasn't possible at the time, but today engineers at MIT have turned that no into a yes. A team of researchers at MIT's Computer Science and Artificial Intelligence Laboratory, along with a team from the University of Washington, has developed algorithms capable of solving word problems directly. For now, the system can correctly answer algebra problems, and it is likely to evolve to handle physics and chemistry problems as well. You don't have to break the question into parts or explain it to the computer; the system does all of that by itself. For students, that makes homework noticeably easier.
The system was developed by Nate Kushman, Regina Barzilay, and their team. To make the computer understand word problems, the researchers relied on two existing computational tools: the computer algebra system Macsyma and a sentence parser. Macsyma identifies the structure of the word problem and converts it into a template, while the sentence parser relates the words to one another so that their meanings are put in the correct context. For example, if the system encounters the phrase "reacts with", it takes that as a hint that the problem is related to chemistry. If the question contains words like "costs" or "price", the system infers that the problem is likely to involve money-related calculations.
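As a rough illustration of this kind of cue-phrase hinting, here is a minimal Python sketch. The cue phrases and template names are hypothetical stand-ins, not taken from the researchers' system; the point is only to show how surface wording can point toward a family of equation templates.

```python
# A toy illustration of cue-phrase hinting, NOT the MIT system's actual logic.
# The phrases and template names below are hypothetical stand-ins.

CUE_TEMPLATES = {
    "reacts with": "chemistry_reaction",
    "costs": "money_total",
    "price": "money_total",
    "miles per hour": "rate_time_distance",
}

def hint_templates(problem_text):
    """Return the template families suggested by cue phrases in the problem."""
    text = problem_text.lower()
    return {template for phrase, template in CUE_TEMPLATES.items() if phrase in text}

print(hint_templates("Each ticket costs 5 dollars; the total price was 40 dollars."))
# -> {'money_total'}
```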
For the researchers' system, understanding a word problem is a matter of correctly mapping elements in the parsing diagram of its constituent sentences onto one of Macsyma's equation templates. To teach the system how to perform that mapping, and to produce the equation templates, the researchers used machine learning. They found a website on which algebra students posted word problems they were having difficulty with, and where their peers could then offer solutions. From an initial group of roughly 2,000 problems, they culled 500 that represented the full range of problem types found in the larger set. In a series of experiments, the researchers would randomly select 400 of the 500 problems, use those to train their system, and then test it on the remaining 100.
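The repeated 400/100 train-test split described above is simple to reproduce in outline. The sketch below, using placeholder problems, only shows the evaluation protocol, not the researchers' actual code.

```python
# Sketch of the evaluation protocol only: shuffle the 500 curated problems,
# train on 400, and test on the held-out 100. The problems here are placeholders.

import random

def split_problems(problems, train_size=400, seed=None):
    """Randomly split the curated problems into training and test sets."""
    rng = random.Random(seed)
    shuffled = list(problems)
    rng.shuffle(shuffled)
    return shuffled[:train_size], shuffled[train_size:]

problems = [f"problem_{i}" for i in range(500)]   # stand-in for the 500 word problems
train_set, test_set = split_problems(problems, seed=0)
print(len(train_set), len(test_set))              # 400 100
```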
For the training, however, they used two different approaches. In the first approach, they fed the system both word problems and their translations into algebraic equations, 400 examples of each. In the second, they fed the system only a few examples of the five most common types of word problems along with their algebraic translations; the rest of the examples included only the word problems and their numerical solutions. In the first case, the system, after training, was able to solve roughly 70 percent of its test problems; in the second, that figure dropped to 46 percent.
But according to Nate Kushman, an MIT graduate student in electrical engineering and computer science and lead author on the new paper, that's still good enough to offer hope that the approach could generalise to more complex problems.
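To make the difference between the two training regimes concrete, here is a hedged Python sketch. The field names and example data are hypothetical; the point is only that full supervision pairs every problem with its equations, while the weaker regime pairs most problems with a numeric answer alone.

```python
# Illustrative only: the two supervision regimes described above.
# Field names and example data are hypothetical, not from the paper.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    text: str                        # the word problem
    equations: Optional[str] = None  # gold algebraic equations, if annotated
    answer: Optional[float] = None   # numeric solution

# Regime 1: every training example carries its equations (full supervision).
fully_supervised = [
    Example("A notebook costs 2 dollars. How many can you buy for 10 dollars?",
            equations="2*x = 10", answer=5.0),
    # ... 400 such examples
]

# Regime 2: only a handful of seed examples carry equations;
# the rest carry just the numeric answer (weaker supervision).
weakly_supervised = [
    Example("A notebook costs 2 dollars. How many can you buy for 10 dollars?",
            equations="2*x = 10", answer=5.0),
    Example("Two trains leave stations 300 miles apart...", answer=3.0),
    # ... mostly answer-only examples
]

def answer_accuracy(predicted, gold):
    """Fraction of test problems whose predicted answer matches the gold answer."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)
```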
