The use of computers to prove mathematical theorems using formal logic emerged as the field of automated theorem proving in the 1950s. It included the use of heuristic methods designed to simulate human problem solving, as in the Logic Theory Machine, developed by Allen Newell, Herbert A. Simon, and Cliff Shaw, as well as algorithmic methods, such as the resolution principle developed by John Alan Robinson. In addition to its use for finding proofs of mathematical theorems, automated theorem proving has also been used for program verification in computer science.
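The resolution principle mentioned above can be illustrated with a minimal sketch for propositional logic: two clauses containing complementary literals are combined into a resolvent, and deriving the empty clause proves the original clause set unsatisfiable. The clause encoding (sets of string literals, `~` for negation) and function names here are illustrative choices, not Robinson's original formulation, which operates on first-order clauses with unification.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literals).
    A literal is a string; negation is written with a leading '~'."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            # Remove the complementary pair and merge the rest.
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def refutes(clauses):
    """Saturate the clause set under resolution.
    Deriving the empty clause means the set is unsatisfiable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True  # empty clause: contradiction derived
                new.add(r)
        if new <= clauses:
            return False  # saturated without a contradiction
        clauses |= new
```

For example, the set {P}, {~P or Q}, {~Q} resolves to the empty clause (so it is unsatisfiable), whereas {P}, {Q} saturates without a contradiction.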
There are two different types of problems, ill-defined and well-defined: different approaches are used for each.
Well-defined problems have specific goals and clear expected solutions, while ill-defined problems do not.
In these disciplines, problem solving is part of a larger process that encompasses problem determination, de-duplication, analysis, diagnosis, repair, and other steps.
Other problem solving tools are linear and nonlinear programming, queuing systems, and simulation.
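Of the tools listed above, simulation of a queuing system is easy to sketch. The following is a minimal single-server (M/M/1-style) queue simulation with exponential interarrival and service times; the function name, parameters, and the choice of distributions are illustrative assumptions, not a reference to any particular library.

```python
import random

def mm1_wait_times(arrival_rate, service_rate, n_customers, seed=0):
    """Simulate a single-server queue and return each customer's
    waiting time in the queue (time between arrival and service start).
    Interarrival and service times are exponential -- an assumption
    chosen for simplicity, as in the classic M/M/1 model."""
    rng = random.Random(seed)
    t_arrival = 0.0       # current customer's arrival time
    server_free_at = 0.0  # time at which the server next becomes idle
    waits = []
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free_at)  # wait if server is busy
        waits.append(start - t_arrival)
        server_free_at = start + rng.expovariate(service_rate)
    return waits
```

Running, say, `mm1_wait_times(0.5, 1.0, 10_000)` and averaging the result gives an empirical estimate of the mean queueing delay, which can be compared against the analytical value for the corresponding queueing model.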
Much of computer science involves designing completely automatic systems that will later solve some specific problem -- systems to accept input data and, in a reasonable amount of time, calculate the correct response or a correct-enough approximation.
In addition, people in computer science spend a surprisingly large amount of human time finding and fixing problems in their programs -- debugging.
Finally, a solution is selected to be implemented and verified.
Problems have a goal to be reached, and how it is reached depends upon problem orientation (problem-solving coping style and skills) and systematic analysis.