
Recursion in logic. Tail recursion and loop

Recursion is the property of an object to contain or reproduce itself: an object is recursive if parts of it look the same as the object as a whole. Recursion is very widely used in mathematics and programming:

  • data structures:
    • a graph (in particular trees and lists) can be thought of as a collection of a single node and a subgraph (smaller graph);
    • the string consists of the first character and a substring (smaller string);
  • design patterns such as Decorator: a decorator object can include other objects that are also decorators. McColm Smith studied recursive patterns in detail, describing in his book a general design pattern, Recursion;
  • recursive functions (algorithms) call themselves.

The article is devoted to the analysis of the complexity of recursive algorithms; the necessary mathematical background is given and examples are considered. In addition, tail recursion and the possibility of replacing recursion with a loop are described.

Examples of recursive algorithms

A recursive algorithm always breaks the problem into parts that have the same structure as the original problem but are simpler. To solve the subproblems, the function is called recursively, and their results are combined in some way. The problem is divided only when it cannot be solved immediately (that is, when it is still too complex).

For example, the task of processing an array can often be reduced to processing its parts. The division into parts is performed until they become elementary, i.e. simple enough to get the result without further simplification.

Finding an Array Element

start; search(array, begin, end, element)
  ; searches for an element with value element in array between indexes begin and end
  if begin > end
    result := false        ; element not found
  else if array[begin] = element
    result := true         ; element found
  else
    result := search(array, begin+1, end, element)
end; return result

The algorithm divides the original array into two parts: the first element and the rest of the array. There are two simple cases in which no division is required: all elements have already been processed, or the first element is the desired one.
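For illustration, the same algorithm can be written in C++ (a minimal sketch; the function name, signature and the int element type are assumptions of this example, not part of the original pseudocode):

// Recursive linear search, following the pseudocode above.
bool search(const int arr[], int begin, int end, int value)
{
    if (begin > end)              // the considered range is empty: the element was not found
        return false;
    if (arr[begin] == value)      // the first element is the desired one
        return true;
    return search(arr, begin + 1, end, value);   // search in the rest of the array
}

A call such as search(a, 0, 6, x) examines a seven-element array a.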

In the search algorithm, the array could be divided in another way (for example, in half), but this would not affect the efficiency. If the array is sorted, then it is advisable to divide it in half, because at each step, the amount of data to be processed can be cut by half.

Binary search in an array

Binary search is performed on a sorted array. At each step, the required element is compared with the value in the middle of the array. Depending on the results of the comparison, either the left side or the right side can be "discarded".

start; binary_search(array, begin, end, element)
  ; searches for an element with value element
  ; in the array sorted in ascending order
  ; between indexes begin and end
  if begin > end
    end; return false            ; element not found
  mid := (end + begin) div 2     ; index of the element in the middle of the considered part of the array
  if array[mid] = element
    end; return true             ; element found
  if array[mid] < element
    result := binary_search(array, mid+1, end, element)
  else
    result := binary_search(array, begin, mid-1, element)
end; return result
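A minimal C++ sketch of the same idea (the signature and the int element type are assumptions of this example):

// Recursive binary search in an array sorted in ascending order.
bool binary_search_rec(const int arr[], int begin, int end, int value)
{
    if (begin > end)                 // the range is empty: the element was not found
        return false;
    int mid = (begin + end) / 2;     // the middle of the considered part of the array
    if (arr[mid] == value)
        return true;                 // the element was found
    if (arr[mid] < value)
        return binary_search_rec(arr, mid + 1, end, value);   // continue in the right half
    return binary_search_rec(arr, begin, mid - 1, value);     // continue in the left half
}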

Calculating Fibonacci Numbers

Fibonacci numbers are defined by a recurrence, i.e. one in which each element is expressed through the previous elements: \(F_0 = 0,\ F_1 = 1,\ F_n = F_{n-1} + F_{n-2},\ n \geq 2\).

start; fibonacci(number)
  if number = 0
    end; return 0
  if number = 1
    end; return 1
  fib_1 := fibonacci(number-1)
  fib_2 := fibonacci(number-2)
  result := fib_1 + fib_2
end; return result

Quicksort

At each step, the quicksort algorithm selects one of the elements (the pivot) and partitions the array relative to it into two parts, which are then processed recursively. Elements smaller than the pivot are placed in one part, and the rest in the other.

Flowchart of quicksort algorithm
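In addition to the flowchart, the idea can be sketched in C++ (taking the last element as the pivot and partitioning in place are assumptions of this example, not something prescribed by the flowchart):

#include <algorithm>   // std::swap

// Recursive quicksort sketch; the last element of the range is taken as the pivot.
void quick_sort(int arr[], int begin, int end)
{
    if (begin >= end)                  // zero or one element: already sorted
        return;
    int pivot = arr[end];
    int i = begin;                     // boundary of the "smaller than the pivot" part
    for (int j = begin; j < end; ++j)
        if (arr[j] < pivot)
            std::swap(arr[i++], arr[j]);
    std::swap(arr[i], arr[end]);       // put the pivot between the two parts
    quick_sort(arr, begin, i - 1);     // recursively sort the left part
    quick_sort(arr, i + 1, end);       // recursively sort the right part
}

A call quick_sort(a, 0, n - 1) sorts an n-element array in place.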

Merge sort

The merge sort algorithm is based on the ability to quickly combine ordered arrays (or lists) so that the result is ordered. The algorithm splits the original array into two parts in an arbitrary way (usually in half), sorts them recursively, and merges the result. The division occurs as long as the size of the array is greater than one, because an empty array and an array of one element are always sorted.

Block diagram of merge sort

At each merge step, the first unprocessed element of each of the two lists is examined. The elements are compared, the smaller of them is added to the result and marked as processed. The merge continues until one of the lists is exhausted.

start; merge(Array1, Size1, Array2, Size2)
  ; the source arrays are ordered; the result is an ordered array of length Size1+Size2
  i := 0, j := 0
  loop
    if i >= Size1
      add elements j .. Size2 of Array2 to the end of the result
      exit loop
    if j >= Size2
      add elements i .. Size1 of Array1 to the end of the result
      exit loop
    if Array1[i] < Array2[j]
      result := result + Array1[i]
      i := i + 1
    else   ; Array1[i] >= Array2[j]
      result := result + Array2[j]
      j := j + 1
end; return result
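A possible C++ sketch of the whole algorithm over std::vector (the container choice and the function names are assumptions of this example):

#include <cstddef>
#include <vector>

// Merge two ordered vectors into one ordered vector, as in the pseudocode above.
std::vector<int> merge(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> result;
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size())
        result.push_back(a[i] < b[j] ? a[i++] : b[j++]);   // take the smaller unprocessed element
    while (i < a.size()) result.push_back(a[i++]);         // append the tail of the first array
    while (j < b.size()) result.push_back(b[j++]);         // append the tail of the second array
    return result;
}

// Recursive merge sort: split in half, sort the parts, merge the results.
std::vector<int> merge_sort(const std::vector<int>& v)
{
    if (v.size() <= 1)
        return v;                                          // an empty or one-element array is already sorted
    std::size_t mid = v.size() / 2;
    std::vector<int> left(v.begin(), v.begin() + mid);
    std::vector<int> right(v.begin() + mid, v.end());
    return merge(merge_sort(left), merge_sort(right));
}

Returning new vectors keeps the sketch short; an in-place version would avoid the extra copies.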

Analysis of recursive algorithms

When a loop is analysed, the complexity of one iteration and the number of iterations in the worst, best and average cases are calculated. This approach cannot be applied directly to a recursive function, because the result is a recurrence relation. For example, for the function that finds an element in an array:

\(
T^{search}_n = \begin{cases}
\mathcal{O}(1), & n = 0 \\
\mathcal{O}(1) + T^{search}_{n-1}, & n > 0
\end{cases}
\)

Recurrence relations by themselves do not allow us to evaluate complexity: we cannot simply compare them, and therefore we cannot compare the efficiency of the corresponding algorithms. We need a closed-form formula that describes the recurrence relation; a universal way to obtain one is to guess the formula using the substitution method and then prove by mathematical induction that it does satisfy the relation.

Substitution method (iterations)

It consists in sequential replacement of the recurrent part in the expression to obtain new expressions. The substitution is made until the general principle can be grasped and expressed in the form of a non-recursive formula. For example, to find an element in an array:

\(
T^{search}_n = \mathcal{O}(1) + T^{search}_{n-1} =
2\times\mathcal{O}(1) + T^{search}_{n-2} =
3\times\mathcal{O}(1) + T^{search}_{n-3} = \ldots
\)

We can assume that \(T^{search}_n = T^{search}_{n-k} + k\times\mathcal{O}(1)\), and then \(T^{search}_n = T^{search}_0 + n\times\mathcal{O}(1) = \mathcal{O}(n)\).

We have derived the formula, but the first step contains an assumption, i.e. there is no proof yet that the formula corresponds to the recurrence; the method of mathematical induction provides such a proof.
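As an additional worked example (not from the original text), the same substitution method applied to the binary search recurrence gives

\(
T^{binarySearch}_n = \mathcal{O}(1) + T^{binarySearch}_{n/2} =
2\times\mathcal{O}(1) + T^{binarySearch}_{n/4} = \ldots =
k\times\mathcal{O}(1) + T^{binarySearch}_{n/2^k},
\)

and the recursion stops when \(n/2^k \approx 1\), i.e. after \(k \approx \log_2 n\) substitutions, so \(T^{binarySearch}_n = \mathcal{O}(\log n)\). The same estimate is obtained below with the general method.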

Method of mathematical induction

It allows one to prove the truth of a statement \(P_n\) and consists of two steps:

  1. proof of the statement for one or more special cases \(P_0, P_1, \ldots\);
  2. derivation of \(P_{n+1}\) from the truth of \(P_n\) (the inductive hypothesis) and the special cases.

Let us prove the correctness of the assumption made when estimating the complexity of the search function (\(T^{search}_n = (n+1)\times\mathcal{O}(1)\)):

  1. \(T^{search}_1 = 2\times\mathcal{O}(1)\) is true by the condition (it can be obtained by substitution into the original recurrence);
  2. assume that \(T^{search}_n = (n+1)\times\mathcal{O}(1)\) is true;
  3. it is required to prove that \(T^{search}_{n+1} = ((n+1)+1)\times\mathcal{O}(1) = (n+2)\times\mathcal{O}(1)\):
    1. substitute \(n+1\) into the recurrence relation: \(T^{search}_{n+1} = \mathcal{O}(1) + T^{search}_n\);
    2. on the right-hand side, apply the inductive hypothesis: \(T^{search}_{n+1} = \mathcal{O}(1) + (n+1)\times\mathcal{O}(1) = (n+2)\times\mathcal{O}(1)\);
    3. the assertion is proven.

Often, such a proof is a rather laborious process, but it is even more difficult to identify a pattern using the substitution method. In this regard, the so-called general method is used.

General (basic) method for solving recurrence relations

The general method is not universal, for example, it cannot be used to estimate the complexity of the above algorithm for calculating Fibonacci numbers. However, it applies to all cases where the divide and conquer approach is used:

\(T_n = a\cdot T_{n/b} + f_n;\quad a, b = \mathrm{const},\ a \geq 1,\ b > 1,\ f_n > 0\ \forall n\).

Relations of this kind are obtained when the original problem is divided into \(a\) subtasks, each of which processes \(\frac{n}{b}\) elements. \(f_n\) is the complexity of dividing the problem into parts and combining the solutions. Besides the form of the relation, the general method imposes restrictions on the function \(f_n\), distinguishing three cases:

  1. \(\exists\, \varepsilon > 0: f_n = \mathcal{O}(n^{\log_b a - \varepsilon}) \Rightarrow T_n = \Theta(n^{\log_b a})\);
  2. \(f_n = \Theta(n^{\log_b a}) \Rightarrow T_n = \Theta(n^{\log_b a} \cdot \log n)\);
  3. \(\exists\, \varepsilon > 0,\ c < 1: f_n = \Omega(n^{\log_b a + \varepsilon}),\ f_{n/b} \leq c \cdot f_n \Rightarrow T_n = \Theta(f_n)\).

The correctness of the assertions in each case is proved formally. The task of analysing a recursive algorithm is thus reduced to determining which case of the master theorem the recurrence relation falls under.

Analysis of the binary search algorithm

The algorithm splits the input data into two parts (\(b = 2\)) but processes only one of them (\(a = 1\)), and \(f_n = 1\). \(n^{\log_b a} = n^{\log_2 1} = n^0 = 1\). The function for splitting the task and combining the results grows at the same rate as \(n^{\log_b a}\), so the second case of the theorem applies:

\(T^{binarySearch}_n = \Theta(n^{\log_b a} \cdot \log n) = \Theta(1 \cdot \log n) = \Theta(\log n)\).

Search algorithm analysis

The recursive function splits the original task into one subtask (a = 1), the data is divided into one part (b = 1). We cannot use the main theorem to analyze this algorithm, because the condition \(b > 1\) is not satisfied.

To carry out the analysis, one can use the substitution method or the following reasoning: each recursive call reduces the size of the input data by one, so there will be \(n\) calls in total, each of complexity \(\mathcal{O}(1)\). Then \(T^{search}_n = n \cdot \mathcal{O}(1) = \mathcal{O}(n)\).

Analysis of the merge sort algorithm

The input data is divided into two parts, both of which are processed: \(a = 2,\ b = 2,\ n^{\log_b a} = n\).

When processing a list, partitioning may require \(\Theta(n)\) operations, and for an array, it is performed in constant time (\(\Theta(1)\)). However, \(\Theta(n)\) will be used to join the results anyway, so \(f_n = n\).

The second case of the theorem applies: \(T^{mergeSort}_n = \Theta(n^{\log_b a} \cdot \log n) = \Theta(n \cdot \log n)\).

Analysis of the complexity of quick sort

In the best case, the original array is divided into two parts, each containing half of the original data. The division requires \(n\) operations. The complexity of combining the results depends on the data structures used: \(\mathcal{O}(n)\) for an array and \(\mathcal{O}(1)\) for a linked list. Since \(a = 2,\ b = 2,\ f_n = n\), the complexity of the algorithm is the same as that of merge sort: \(T^{quickSort}_n = \mathcal{O}(n \cdot \log n)\).

However, in the worst case the minimum or maximum element of the array is always chosen as the pivot. Then \(b = 1\), which means that once again we cannot use the master theorem. However, we know that in this case \(n\) recursive calls will be made, each of which partitions the array into parts (\(\mathcal{O}(n)\)), so the complexity of the algorithm is \(T^{quickSort}_n = \mathcal{O}(n^2)\).

When analyzing quicksort by substitution, one would also have to consider separately the best and worst cases.

Tail recursion and loop

Analysing the complexity of recursive functions is noticeably harder than the analogous estimation for loops, but the main reason why loops are preferable is the high cost of a function call.

When a function is called, control is transferred to it. Transferring control by itself only requires changing the value of the program counter register, in which the processor stores the number of the currently executing instruction; control is transferred to branches of an algorithm in the same way, for example when a conditional statement is used. However, a call is not just a transfer of control: after the called function has completed its calculations, control must return to the point where the call was made, and the values of the local variables that existed there before the call must be restored.

To implement this behavior, a stack (the call stack) is used: it stores the return address and information about local variables. The stack is not infinite, so recursive algorithms can overflow it, and in any case working with it can take a significant share of the running time.

In some cases it is quite easy to replace a recursive function with a loop, for example for the functions discussed above; in other cases a more creative approach is required, but most often such a replacement is possible. There is also a special kind of recursion in which the recursive call is the last operation performed by the function. Obviously, in this case the calling function does not change the result in any way, so there is no point in returning control to it. Such recursion is called tail recursion; compilers automatically replace it with a loop.

Tail recursion is often obtained with the help of the accumulating-parameter method, which consists in adding an extra accumulator argument to the function, in which the result is accumulated. The function performs the calculations on the accumulator before the recursive call. A good example of this technique is the factorial function:
\(
fact_n = n \cdot fact_{n-1} \\
fact_3 = 3 \cdot fact_2 = 3 \cdot (2 \cdot fact_1) = 3 \cdot (2 \cdot (1 \cdot fact_0)) = 6 \\
fact_n = factTail_{n,\ 1} \\
factTail_{n,\ accumulator} = factTail_{n-1,\ accumulator \cdot n} \\
factTail_{3,\ 1} = factTail_{2,\ 3} = factTail_{1,\ 6} = factTail_{0,\ 6} = 6
\)
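A minimal C++ sketch of this accumulating-parameter idea is given below (the function names are illustrative, and whether the compiler actually turns the tail call into a loop depends on the compiler and the optimisation flags):

// Ordinary recursion: the multiplication is performed after the recursive call returns.
unsigned long long fact(unsigned int n)
{
    return n == 0 ? 1 : n * fact(n - 1);
}

// Tail recursion: the recursive call is the last operation; the result is carried in the accumulator.
unsigned long long fact_tail(unsigned int n, unsigned long long accumulator = 1)
{
    return n == 0 ? accumulator : fact_tail(n - 1, accumulator * n);
}

Here fact_tail(3) returns 6, reproducing the factTail trace shown above.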

As a more complex example, consider the function for calculating Fibonacci numbers. The main function calls a helper function that uses the accumulative parameter method, passing the initial value of the iterator and two accumulators (two previous Fibonacci numbers) as arguments.

start; fibonacci(number)
  return fibonacci(number, 1, 1, 0)
end
start; fibonacci(number, iterator, fib1, fib2)
  if iterator = number
    return fib1
  return fibonacci(number, iterator + 1, fib1 + fib2, fib1)
end

A function with an accumulating parameter returns the accumulated result once the required count of numbers has been calculated; otherwise it increments the counter, calculates the next Fibonacci number and makes a recursive call. Optimising compilers can detect that the result of the recursive call is passed unchanged to the function's output and replace it with a loop. This technique is especially relevant in functional and logic programming languages, where the programmer cannot explicitly use loop constructs.
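The same helper with an iterator and two accumulators might look like this in C++ (a sketch mirroring the pseudocode above; number >= 1 is assumed, as in the pseudocode):

// Helper with an iterator and two accumulators holding the previous Fibonacci numbers.
unsigned long long fibonacci_tail(unsigned int number, unsigned int iterator,
                                  unsigned long long fib1, unsigned long long fib2)
{
    if (iterator == number)
        return fib1;                                                  // the required position has been reached
    return fibonacci_tail(number, iterator + 1, fib1 + fib2, fib1);   // tail call
}

unsigned long long fibonacci(unsigned int number)
{
    return fibonacci_tail(number, 1, 1, 0);                           // start from F(1) = 1, F(0) = 0
}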

Literature

  1. Multithreaded Qt Server. Thread pool. Decorator pattern [Electronic resource]. URL: https://website/archives/1390 (accessed 21.02.2015).
  2. McColm Smith J.: trans. from English. M.: OOO "I.D. Williams", 2013. 304 p.
  3. Skiena S. Algorithms. Development Guide. 2nd ed.: trans. from English. SPb.: BHV-Petersburg, 2011. 720 p.: ill.
  4. Vasiliev V. S. Complexity analysis of algorithms. Examples [Electronic resource]. URL: https://website/archives/1660 (accessed 21.02.2015).
  5. Aho A., Hopcroft J., Ullman J. Data Structures and Algorithms. M.: Williams, 2007.
  6. Miller R., Boxer L. Sequential and Parallel Algorithms: A General Approach; trans. from English. M.: BINOM. Knowledge Laboratory, 2006. 406 p.
  7. Sergievsky G.M., Volchenkov N.G. Functional and Logic Programming: textbook for students of higher educational institutions. M.: Publishing Center "Academy", 2010. 320 p.

The term comes from the Latin recursio (return). In the general case, this is the name for a process of repeating elements in a "self-similar" way.

A striking example of recursion is the matryoshka, the Russian nesting doll. A recursive definition: "A matryoshka is a detachable hollow wooden doll that contains a smaller matryoshka inside." Here is recursion in plain Russian. If it were not for the limits of the craftsmen's skill, an ideal matryoshka would go down into itself to the atomic level, and even deeper; Lefty simply did not have a tool small and strong enough. The upper limit is also theoretically unlimited, but baobabs of a suitable size do not grow on our planet. In general, for technical reasons, the recursion must be finite.

In programming (as in mathematics), recursion is the process of a function calling itself (direct recursion), or a call from function A to function B which in turn contains a call to function A (indirect or mutual recursion). Of course, recursive calls must have a reachable termination condition, otherwise such a program will "hang" just like an infinite loop; unlike an infinite loop, however, infinite recursion will crash with a stack overflow.

Recursion example

The most boring example of recursion in mathematical programming is calculating the factorial, and we will not break with this glorious tradition. For those who have not met it yet: N! (N factorial) is the product of all natural numbers from one to N (the factorial of zero is 1).
You can simply multiply the numbers from 1 to N in a loop. Or you can write a factorial(n) function that contains a condition and a call to itself: if n is less than or equal to one, the function returns 1; otherwise it returns n multiplied by factorial(n - 1).
Sketching in PHP

function factorial($n) {
    if ($n <= 1) {        // 0! = 1 and 1! = 1
        return 1;
    } else {
        return intval($n * factorial($n - 1));
    }
}

Practical applications of recursion

“Well, why is it needed here?” - an impatient young reader will ask us - “Scientific nonsense, tediousness, all sorts of factorials ... But in practice, what should this recursion be applied to?”
"To the black eye of web programming" - we will answer without hesitation. And we'll justify it right away.

In fact, there are many more applications of recursion in web programming than it seems, because recursion is perhaps the only way to traverse a tree structure when neither its size nor its nesting depth is known in advance. By the way, the construction and traversal of graphs will not do without it either. This is a classic, gentlemen: try to search for the necessary files in a Unix directory tree in some other way, and it will immediately become clear to you that without recursion you get nowhere.

Try to do without it when building a sitemap with a hierarchical structure of sections in the form of nested lists. You would sooner hang yourself than build it if you do not know in advance exactly how many levels the nesting depth is limited to. And even if you do know, but still try to do without recursion, then instead of a simple, transparent and trouble-free function you will build a bulky software contraption on crutches. And when you are done and wipe your sweaty forehead, the grim truth of life will come to you: as soon as the nesting depth changes, your sprawling construction will instantly stop working correctly, and you are unlikely to be able to reuse it anywhere else.
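As a sketch of this idea (the data structure and all names here are invented for the example), a tree of site sections of arbitrary depth can be printed as a nested list by one short recursive function:

#include <iostream>
#include <string>
#include <vector>

// A site section that may contain any number of nested subsections.
struct Section {
    std::string title;
    std::vector<Section> children;
};

// Print the section tree as an indented nested list; depth controls the indentation level.
void print_sitemap(const Section& s, int depth = 0)
{
    std::cout << std::string(2 * depth, ' ') << "- " << s.title << '\n';
    for (const Section& child : s.children)
        print_sitemap(child, depth + 1);   // the same function handles any nesting depth
}

int main()
{
    Section root{"Home", {{"Articles", {{"Recursion", {}}}}, {"About", {}}}};
    print_sitemap(root);
    return 0;
}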

Recursion in search engines

Yes, exactly. Search engines also have nowhere to hide from recursion. Ever since it became customary to measure the authority of a site (document) by the number of links, search engines have fallen into a recursive trap, and may they wander in it forever (the author's sincere good wish). The link "weight" of a site is made up of small pieces of the "weight" of all the sites that link to it. To calculate this weight for site A, which is linked to by B, C and D, it is necessary to calculate their weights, which in turn are passed on by all sorts of other sites, whose weights also need to be calculated... and so on across the entire part of the Web known to the search engine. Completely recursive. And you say it is all dry theory. It is the most real practice there is.

Recursive PageRank by Google

The creators of Google published their basic algorithm for calculating PageRank a long time ago, and no matter how much it has changed and been supplemented with improvements since then, the foundation remains the same. There is no way to know how much PageRank page B passes to page A until we have calculated how much PageRank page B has received from all the pages that link to it, and we cannot know that until we have calculated the PageRank of those pages... shall we continue? Probably there is no need. It is Her Majesty Recursion again.
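Purely as a toy illustration of this idea (a simplified textbook scheme with a damping factor of 0.85, not Google's actual production algorithm), the iterative computation that approximates this recursive definition might look like this:

#include <cstddef>
#include <vector>

// Toy PageRank: pages are numbered 0..n-1, links[i] lists the pages that page i links to.
// Each iteration redistributes the current weights; repeating it approximates the fixed point
// that the recursive definition describes. Pages without outgoing links simply lose their
// weight in this simplified sketch.
std::vector<double> pagerank(const std::vector<std::vector<std::size_t>>& links,
                             int iterations = 50, double d = 0.85)
{
    std::size_t n = links.size();
    std::vector<double> rank(n, 1.0 / n);
    for (int it = 0; it < iterations; ++it) {
        std::vector<double> next(n, (1.0 - d) / n);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t target : links[i])
                next[target] += d * rank[i] / links[i].size();  // page i passes a share of its weight
        rank = next;
    }
    return rank;
}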


In mathematics, recursion refers to a method of defining functions and number series: a recursively defined function determines its value by referring to itself with other arguments. Two variants are possible here:

\(e = 2 + \cfrac{2}{2 + \cfrac{3}{3 + \cfrac{4}{4 + \ldots}}} = 2 + f(2)\), where \(f(n) = \cfrac{n}{n + f(n+1)}\). Direct calculation by this formula would cause infinite recursion, but it can be proved that the value of \(f(n)\) tends to one as \(n\) grows (therefore, despite the infinity of the continued fraction, the value of the Euler number is finite). For an approximate calculation of \(e\) it is enough to artificially limit the recursion depth to some predetermined number and, when it is reached, use one instead of \(f(n)\).
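A small C++ sketch of exactly this depth-limited calculation (the depth limit of 20 is an arbitrary choice for the example):

#include <iostream>

// f(n) = n / (n + f(n+1)); the recursion is cut off at max_depth, where f is replaced by 1.
double f(int n, int max_depth)
{
    if (n > max_depth)
        return 1.0;                       // use 1 instead of f(n) beyond the depth limit
    return n / (n + f(n + 1, max_depth));
}

int main()
{
    double e = 2.0 + f(2, 20);            // e = 2 + f(2)
    std::cout << e << '\n';               // prints approximately 2.71828
    return 0;
}

Raising the depth limit improves the approximation.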

Another example of recursion in mathematics is a number series given by a recurrence formula, where each next term is computed as a function of the n previous terms. Thus, an infinite series can be defined with the help of a finite expression (a combination of the recurrence formula and the values of the first n terms of the series).

struct element_of_list {
    element_of_list *next; /* pointer to the next element of the same type */
    int data;              /* some data */
};

Since an infinite number of nested objects cannot fit into finite memory, in reality such recursively defined structures are always finite. In the final (terminal) cells, null pointers are usually stored; they also serve as flags indicating to the program that processes the structure that its end has been reached.
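For example, a recursive traversal of the list defined above naturally uses the null pointer as its base case (a sketch; the counting function itself is added only for illustration):

// Recursively count the elements of the list: the null pointer is the terminal case
// that stops the recursion.
int list_length(const element_of_list* head)
{
    if (head == nullptr)
        return 0;                            // the end of the structure has been reached
    return 1 + list_length(head->next);      // one element plus the length of the rest of the list
}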

Recursion is a fairly common phenomenon that occurs not only in science but also in everyday life: for example, the Droste effect or the Sierpinski triangle. The easiest way to see recursion is to point a webcam at the computer monitor (after turning the camera on, of course). The camera then records the image of the screen and displays it on that same screen, producing a kind of closed loop; as a result, we observe something resembling a tunnel.

    In programming, recursion is closely related to functions, more precisely, thanks to functions in programming, there is such a thing as recursion or a recursive function. In simple words, recursion is the definition of a part of a function (method) through itself, that is, it is a function that calls itself, directly (in its body) or indirectly (through another function). Typical recursive tasks are: finding n!, Fibonacci numbers. We have already solved such problems, but using cycles, that is, iteratively. Generally speaking, everything that is solved iteratively can be solved recursively, that is, using a recursive function. The whole solution comes down to solving the main or, as it is also called, the base case. There is such a thing as a recursion step or a recursive call. In the case when a recursive function is called to solve a complex problem (not the base case), a number of recursive calls or steps are performed in order to reduce the problem to a simpler one. And so on until we get the basic solution. Let's develop a program that declares a recursive function that calculates n!

    "stdafx.h" #include << "Enter n!: "; cin >>n; cout<< n << "!" << "=" << factorial(n) << endl; // вызов рекурсивной функции system("pause"); return 0; } unsigned long int factorial(unsigned long int f) // рекурсивная функция для нахождения n! { if (f == 1 || f == 0) // базовое или частное решение return 1; // все мы знаем, что 1!=1 и 0!=1 cout << "Step\t" << i << endl; i++; // операция инкремента шага рекурсивных вызовов cout << "Result= " << result << endl; result = f * factorial(f - 1); // функция вызывает саму себя, причём её аргумент уже на 1 меньше return result; }

    // code Code::Blocks

    // Dev-C++ code

    // factorial.cpp: defines the entry point for the console application. #include using namespace std; unsigned long int factorial(unsigned long int);// recursive function prototype int i = 1; // initialization of a global variable for counting the number of recursive calls unsigned long int result; // global variable to store the result returned by the recursive function int main(int argc, char* argv) ( int n; // local variable to pass the entered number from the keyboard cout<< "Enter n!: "; cin >>n; cout<< n << "!" << "=" << factorial(n) << endl; // вызов рекурсивной функции return 0; } unsigned long int factorial(unsigned long int f) // рекурсивная функция для нахождения n! { if (f == 1 || f == 0) // базовое или частное решение return 1; // все мы знаем, что 1!=1 и 0!=1 cout << "Step\t" << i << endl; i++; // операция инкремента шага рекурсивных вызовов cout << "Result= " << result << endl; result = f * factorial(f - 1); // функция вызывает саму себя, причём её аргумент уже на 1 меньше return result; }

The data type unsigned long int is used for the prototype, for the global variable result and for the function itself, because the value of the factorial grows very quickly: already 10! = 3 628 800. If the size of the data type were not sufficient, we would get a completely wrong value as a result. The code declares more statements than are needed just to find n!; this is done so that the program, once run, shows what happens at each step of the recursive calls. The recursive solution of n! consists of the base case and the recursive call: as soon as the value of the parameter f is 1 or 0 (we know that 1! = 1 and 0! = 1), the recursive calls stop and the values start to be returned, one for each recursive call. When the value for the first recursive call is returned, the program prints the computed factorial. In the statement result = f * factorial(f - 1) the factorial() function calls itself, but its argument is already one less; the argument is reduced each time until the particular (base) solution is reached. The result of the program is shown in Figure 1.

Enter n!: 5
Step 1
Result= 0
Step 2
Result= 0
Step 3
Result= 0
Step 4
Result= 0
5!=120

    Figure 1 - Recursion in C++

From the program output, each step is clearly visible, and the result at each step is zero except for the last recursive call. Five factorial had to be calculated. The program made four recursive calls, and on the fifth call the base case was found. Once the program obtained the solution of the base case, it solved the previous steps and printed the overall result. Figure 1 shows only four steps, because the particular solution was found at the fifth step, which eventually returned the final answer, i.e. 120. Figure 2 shows the scheme of the recursive calculation of 5!. The scheme clearly shows that the first result is returned only when the particular solution is reached, and after that a value is returned for each recursive call in turn.

    Figure 2 - Recursion in C++

So, to find 5! you need to know 4! and multiply it by 5; 4! = 4 * 3!, and so on. According to the scheme shown in Figure 2, the calculation is reduced to finding the particular case, that is 1!, after which the values are returned to each recursive call in turn. The last recursive call returns the value of 5!.

    Let's rewrite the program for finding the factorial in such a way as to obtain a table of factorials. To do this, we will declare a for loop in which we will call a recursive function.

// factorial_table.cpp: prints a table of factorials from 1! to n!
#include <iostream>
using namespace std;

unsigned long int factorial(unsigned long int); // recursive function prototype

int main()
{
    int n; // local variable for the number entered from the keyboard
    cout << "Enter n!: ";
    cin >> n;
    for (int k = 1; k <= n; k++)
    {
        cout << k << "!" << "=" << factorial(k) << endl; // call the recursive function
    }
    return 0;
}

unsigned long int factorial(unsigned long int f) // recursive function for finding n!
{
    if (f == 1 || f == 0)            // base (particular) solution
        return 1;                    // 1! = 1 and 0! = 1
    return f * factorial(f - 1);     // the function calls itself with an argument one less
}


The next example prints numbers from the Fibonacci series using a recursive function:

// fibonacci.cpp: defines the entry point for the console application.
#include <iomanip>   // library for formatting the output on the screen
#include <iostream>
using namespace std;

unsigned long fibonacci(unsigned long); // prototype of the recursive function for finding numbers from the Fibonacci series

int main()
{
    unsigned long entered_number; // how many numbers of the series to print
    cout << "Enter number from the Fibonacci series: ";
    cin >> entered_number;
    for (unsigned long counter = 1; counter <= entered_number; counter++)
        cout << setw(2) << fibonacci(counter) << endl; // call the recursive function for each position
    return 0;
}

unsigned long fibonacci(unsigned long n) // recursive function for finding numbers from the Fibonacci series
{
    if (n == 1 || n == 2)     // base case: the first two numbers of the series are equal to 1
        return 1;
    return fibonacci(n - 1) + fibonacci(n - 2); // recursive step
}

The classic Tower of Hanoi problem is also solved naturally by recursion. Suppose it is necessary to move three disks from the first rod to the third one; the second rod is then the auxiliary one.

The program must be written for an arbitrary number n of disks. Since we are solving this problem recursively, we first need to find the particular cases of the solution. In this problem there is only one particular case: when only one disk needs to be moved, no auxiliary rod is even needed (we simply do not pay attention to that). Next we need to organize the recursive solution for the case when the number of disks is greater than one. Let us introduce some notation in order not to write too much:

    <B> - the rod on which the disks are initially located (the base rod);
    <A> - the auxiliary (intermediate) rod;
    <F> - the final rod, the one to which the disks must be moved.

Further on, we will use this notation when describing the algorithm. To move three disks from <B> to <F>, we first need to move two disks from <B> to <A>, and then move the third disk (the largest one) to <F>, because <F> is now free.

To move n disks from <B> to <F>, we first need to move n-1 disks from <B> to <A>, then move the nth disk (the largest one) to <F>, because <F> is now free, and after that move the n-1 disks from <A> to <F>, using the rod <B> as the auxiliary one. These three actions make up the whole recursive algorithm. The same algorithm in pseudocode:
    move n-1 disks from <B> to <A>
    move disk n from <B> to <F>
    move n-1 disks from <A> to <F>, using <B> as the auxiliary rod

// hanoi_tower.cpp: a program that recursively solves the "Tower of Hanoi" problem
#include <iomanip>
#include <iostream>
using namespace std;

void tower(int, int, int, int); // declaration of the prototype of the recursive function
int count = 1;                  // global variable for counting the number of steps

int main()
{
    cout << "Enter the number of disks: ";     // how many disks have to be moved
    int number;
    cin >> number;
    cout << "Enter the number of basic rod: "; // the rod on which the disks are located initially
    int basic_rod;
    cin >> basic_rod;
    cout << "Enter the number of final rod: "; // the rod to which the disks must be moved
    int final_rod;
    cin >> final_rod;
    int help_rod;
    // determine the number of the auxiliary rod from the numbers of the initial and final rods
    if (basic_rod != 2 && final_rod != 2)
        help_rod = 2;
    else if (basic_rod != 1 && final_rod != 1)
        help_rod = 1;
    else if (basic_rod != 3 && final_rod != 3)
        help_rod = 3;
    tower(number,     // the number of disks to move
          basic_rod,  // the rod on which the disks are located initially
          help_rod,   // the rod used as the auxiliary one
          final_rod); // the rod to which the disks must be moved
    return 0;
}

void tower(int count_disk, int baza, int help_baza, int new_baza)
{
    if (count_disk == 1) // termination condition of the recursive calls
    {
        cout << setw(2) << count << ") " << baza << " -> " << new_baza << endl;
        count++;
    }
    else
    {
        tower(count_disk - 1, baza, new_baza, help_baza); // move all disks except the last one to the auxiliary rod
        tower(1, baza, help_baza, new_baza);              // move the last disk from the initial rod to the final rod
        tower(count_disk - 1, help_baza, baza, new_baza); // move all disks from the auxiliary rod to the final rod
    }
}

Figure 5 shows an example of the work of the recursive Tower of Hanoi program. First we entered the number of disks (three), then the number of the base rod (the first one) and the number of the final rod (the third one); the second rod automatically became the auxiliary one. The program produced a sequence of moves that solves the problem in the minimum number of steps.

Enter the number of disks: 3
Enter the number of basic rod: 1
Enter the number of final rod: 3
 1) 1 -> 3
 2) 1 -> 2
 3) 3 -> 2
 4) 1 -> 3
 5) 2 -> 1
 6) 2 -> 3
 7) 1 -> 3

    Figure 5 - Recursion in C++

It can be seen from the figure that a disk first moves from rod one to rod three, then from rod one to rod two, then from rod three to rod two, and so on. That is, the program gives only the sequence of disk moves and the minimum number of steps in which all the disks will be moved.

All these problems could also be solved iteratively. The question arises: "Which is better, to solve iteratively or recursively?" The answer: "The disadvantage of recursion is that it consumes far more computer resources than iteration; this shows up as a heavier load on both the RAM and the processor. If an iterative solution of a particular problem is obvious, use it; otherwise, use recursion!" Depending on the problem being solved, the difficulty of writing the program changes with the chosen method, but a problem solved recursively is often much clearer and shorter in terms of code readability.

Recursion is an interesting phenomenon in its own right, but in programming it is of particular importance in certain cases. When encountering it for the first time, quite a few people have trouble understanding it. This is due to the huge range of potential uses of the term itself, depending on the context in which "recursion" is used. But it is hoped that this article will help to avoid possible confusion or misunderstanding.

    What is "recursion" anyway?

    The word "recursion" has a whole range of meanings, which depend on the area in which it is used. The universal designation is this: recursions are definitions, images, descriptions of objects or processes in the objects themselves. They are possible only in those cases when the object is a part of itself. Mathematics, physics, programming and a number of other scientific disciplines define recursion in their own way. She found practical application in the work of information systems and physical experiments.

    What is meant by recursion in programming?

    Recursive situations, or recursion in programming, are the moments when a procedure or program function calls itself. As strange as it may sound to those who have started learning programming, there is nothing strange here. It should be remembered that recursions are not difficult, and in some cases they replace loops. If the computer is correctly given a call to a procedure or function, it will simply start executing it.

Recursion can be finite or infinite. For the former to stop calling itself, it must contain a termination condition. For example, the value of a variable can be decreased on every call and, when it reaches a certain value, the calls stop and the program ends or moves on to the next piece of code, depending on what needs to be achieved. Infinite recursion means that the function will keep being called for as long as the computer, or the program in which it runs, keeps working.

It is also possible to organize complex recursion with the help of two functions; this is also called indirect or mutual recursion. Suppose there are functions A and B: function A contains a call to B in its code, and B in turn tells the computer to execute A. Such complex recursion is a way out of a number of situations that are tricky for straightforward computer logic.
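A tiny C++ sketch of such mutual (indirect) recursion, using the classic even/odd pair of functions as the example:

bool is_odd(unsigned int n);                     // forward declaration so that is_even can call it

bool is_even(unsigned int n)
{
    return n == 0 ? true : is_odd(n - 1);        // function A calls function B
}

bool is_odd(unsigned int n)
{
    return n == 0 ? false : is_even(n - 1);      // function B calls function A
}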

    If the reader of these lines has studied program loops, then he probably already noticed the similarity between them and recursion. In general, they can indeed perform similar or identical tasks. With the help of recursion, it is convenient to simulate the work of a loop. This is especially useful where the loops themselves are not very convenient to use. The scheme of software implementation does not differ much between different high-level programming languages. But still, recursion in "Pascal" and recursion in C or another language have their own characteristics. It can be successfully implemented in low-level languages ​​like "Assembler", but this is more problematic and time consuming.

Recursion trees

    What is a "tree" in programming? This is a finite set, consisting of at least one node, which:

    1. It has an initial special node, which is called the root of the whole tree.
    2. The remaining nodes are in a non-zero number of pairwise disjoint subsets, and they are also a tree. All such forms of organization are called subtrees of the main tree.

In other words: trees contain subtrees, which in turn contain smaller subtrees, and so on. This continues until some node has nowhere further to go, and that marks the end of the recursion. There is one more nuance about the schematic representation: ordinary trees grow from the bottom up, but in programming they are usually drawn the other way around. Nodes that have no descendants are called leaf nodes. For ease of notation and for convenience, genealogical terminology (ancestors, children) is used.
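Counting the nodes of such a tree is itself naturally recursive; a sketch (the node structure is an assumption of this example):

#include <vector>

// A tree node: some data plus an arbitrary number of subtrees.
struct Node {
    int value;
    std::vector<Node> subtrees;
};

// The size of the whole tree is the root itself plus the sizes of all its subtrees.
int count_nodes(const Node& root)
{
    int total = 1;                           // count the root
    for (const Node& child : root.subtrees)
        total += count_nodes(child);         // leaf nodes have no subtrees, so the recursion stops there
    return total;
}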

    Why is it used in programming?

Recursion in programming has found its application in solving a number of complex problems. If only a single call needs to be made, it is easier to use an iterative loop, but when there are two or more repetitions, recursive situations are used in order to avoid building a chain of calls by hand and to let them execute in the form of a tree. For a wide class of tasks, organizing the computational process in this way is optimal in terms of resource consumption. So recursion in Pascal, or in any other high-level programming language, is a function or procedure call that repeats itself until the termination conditions are met, regardless of the number of external calls. In other words, there may be only one call to the subroutine in the program text, but it will keep occurring up to a predefined moment. In a way, this is an analogue of a loop with its own specifics of use.

    Differences between recursion in different programming languages

Despite the common implementation scheme and its specific application in each individual case, recursion in programming has its own characteristics in different languages, which can make it harder to find the necessary material. But you should always remember: if a programming language allows calling functions or procedures, then recursion is feasible in it. The most significant differences appear between low-level and high-level programming languages, especially in the possibilities of the software implementation. How it is executed ultimately depends on what task is set, and the recursion is written accordingly. Functions and procedures are used differently, but their goal is always the same: to make them call themselves.

    Recursion is easy. How easy is it to remember the content of an article?

For beginners it can be difficult to understand at first, so an example of recursion, or at least one, is needed. A small example from everyday life helps to grasp the essence of this mechanism for achieving goals in programming. Take two or more mirrors and place them so that all the others are reflected in one of them. You can see that the mirrors reflect themselves many times, creating an infinity effect. Here the recursions are, figuratively speaking, the reflections (there will be many of them). As you can see, it is not difficult to understand, as long as you have the desire; and by studying programming materials you will further see that recursion is a very easy matter.
