Hello, everyone. My name is Fillip Rukhovich. In this first module of our course, we will first talk about how to measure the efficiency of a developed algorithm; second, learn what asymptotics is; and third, learn how to implement some of the simplest algorithms. So let's begin.

In competitive programming, the goal of a participant is to solve the problem. It means that the participant should write a program. This program reads some data in a specific format from the console or an input file, then performs some calculations and prints the answer to the screen or to an output file. A classical example of such a problem is the number sorting problem. In this problem, a positive integer n is given in the input, and then n integer numbers follow. The program should read these n + 1 numbers, perform some actions, and then print the given numbers in increasing, or to be more precise, non-decreasing order.

However, each competitive programming problem contains such a thing as a time limit. One of the popular variants is two seconds. It means that the program should finish its work in no more than two seconds; otherwise, the program will get a "time limit exceeded" message from the jury, and it's not very hard to exceed this limit, because a computer may perform only about 100 million operations per second. So it's not enough just to come up with an algorithm and implement it; we have to do it efficiently.

But what do you think: what does the precise running time of the algorithm depend on? Of course, the main factor is the precise number of basic actions performed by the program; in other words, the number of reads, assignments, additions, subtractions, multiplications, divisions, operations with memory, and so on. It seems that we could just measure the number of such actions, but a number of difficulties occur.
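The core step of the sorting problem described above can be sketched in C++ as follows; this is a minimal sketch of the sorting part only, and the surrounding I/O (reading n and then the n numbers, printing the result) is represented by comments, since the exact input format is an assumption:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the sorting problem's core step. In a full solution one would
// first read n, then read n integers from the input, call this function,
// and print the result; here only the sorting itself is shown.
std::vector<int> solveSorting(std::vector<int> numbers) {
    // Arrange the given numbers in non-decreasing order.
    std::sort(numbers.begin(), numbers.end());
    return numbers;
}
```

For example, given the numbers 3, 1, 2, the function returns them as 1, 2, 3.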
Namely, the precise number of performed actions significantly depends on the input data. Obviously, any reasonable algorithm sorts 5 elements faster than 5 million elements, but even different arrays of equal length may be sorted by the same algorithm in a different number of actions. It's a common situation that, for example, if the given data is sorted or almost sorted, then the algorithm notices it and finishes its work quickly, but on more nontrivial data the same algorithm works much slower. It means that the number of actions is a function not only of the number n, the size of the data, but also of the data itself, and that is a really big number of parameters, which is almost impossible to take into account.

If the program has been written in a higher-level language, such as C++ or Pascal, then the program is compiled into a program in assembly language, and this new program is then translated into machine code. It means that in reality, the computer will perform not the same actions as described in the initial program. Also, we must note that the same program being compiled by two different compilers can be transformed into two programs with very different speeds. And of course, don't forget about the speed of the hardware itself that the program is launched on.

Because of all these difficulties, efficiency is usually measured as follows. First, not the precise number of actions is used, but an upper bound for this number. This upper bound is a function of a relatively small number of parameters; for example, in the case of sorting, the number n may be the only parameter. Later we will consider other variants. Second, the efficiency function is measured up to a multiplicative constant. In other words, the functions f(n) = 5 · 2^n and g(n) = 28 · 2^n are not different for us. The reason is that a slowdown by a constant number of times may be compensated by launching the program on, for example, faster hardware, or with stronger compiler settings.
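As an illustration of how the number of actions depends on the data itself, here is a hypothetical example (not from the lecture): bubble sort with an early exit makes only about n comparisons on already-sorted data, but about n² comparisons on reversed data of the same length:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort with an early exit; returns the number of comparisons made.
// On already-sorted input it stops after a single pass (about n comparisons),
// while on reversed input it needs about n^2 / 2 comparisons.
long long bubbleSortComparisons(std::vector<int> a) {
    long long comparisons = 0;
    bool swapped = true;
    for (std::size_t pass = 0; swapped && pass + 1 < a.size(); ++pass) {
        swapped = false;
        for (std::size_t i = 0; i + 1 < a.size() - pass; ++i) {
            ++comparisons;
            if (a[i] > a[i + 1]) {
                std::swap(a[i], a[i + 1]);
                swapped = true;  // at least one pair was out of order
            }
        }
    }
    return comparisons;
}
```

For 5 elements, the sorted input {1, 2, 3, 4, 5} costs 4 comparisons, while the reversed input {5, 4, 3, 2, 1} costs 10, even though both arrays have the same length.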
Or even by choosing another unit of measure: for example, we can say that the unit is one step of the assembly version of the program instead of, for example, one action like the addition of two numbers. But the most important thing we are interested in is the order of growth of our efficiency function. In other words, we compare functions by how the ratio of their values changes as n grows.

For a more precise analysis, let's consider the following formal definitions, similar to Cauchy's definition of a limit. These definitions may seem difficult, but we will consider a number of examples later, and you will see that working with these notions is quite easy. So let f and g be functions of a natural argument; in other words, f and g are number sequences. f = O(g) if there exists a positive constant C and a natural number n₀ such that for any natural number n which is not less than n₀, f(n) doesn't exceed C · g(n). f = Ω(g) if there exists a positive constant c and a natural number n₀ such that for any natural number n which is not less than n₀, f(n) is not less than c · g(n).

Informally, the statement f = O(g) means that f doesn't grow faster than g; at the same time, the statement f = Ω(g) means that f doesn't grow slower than g. In fact, big O and big Omega are similar to the operators "less than or equal" and "greater than or equal" for functions. There's one more definition: f = Θ(g) means asymptotic equality, that is, f and g grow with the same asymptotic speed.

For better understanding, let's consider a number of functions. First example: f(n) = n and g(n) = n². Obviously, f(n) is always not more than g(n), so f(n) = O(g(n)), and as a consequence, g(n) = Ω(f(n)). Second example: f(n) = 5n and g(n) = 7n.
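For reference, the three definitions above can be written compactly, with the same symbols as in the lecture:

```latex
f = O(g) \iff \exists\, C > 0,\ \exists\, n_0 \in \mathbb{N}:\ f(n) \le C \cdot g(n) \text{ for all } n \ge n_0
f = \Omega(g) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N}:\ f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
f = \Theta(g) \iff f = O(g) \text{ and } f = \Omega(g)
```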
According to the definitions, f(n) = Θ(g(n)), which is equal to Θ(n), because the definitions have such a structure that the asymptotics remains the same after multiplication by a constant.

Third example: f(n) = 10⁹ · n, compared with g(n) = n². In this case, f(n) = O(n²) because the ratio of n² to 10⁹ · n goes to infinity. Here we have a small paradox: for relatively reasonable n, f(n) significantly exceeds n², but asymptotically f(n) is less than n² because of the faster growth of n². More formally, for any positive constants C₁ and C₂, the function C₂ · g(n), starting from some n, will be greater than C₁ · f(n) forever.

Fourth example: f(n) = 8n² + 36n + 4360. In this case, f(n) = O(n²), because both 36n and 4360 are O(n²); it means that after some n, f(n) will not be more than 10n². For this reason, big O brackets contain only the main addend, the term which determines the rate of growth.

Fifth example: f(n) = log_a n and g(n) = log_b n, for a and b some constants which are greater than one. Then f(n) = Θ(g(n)), because the ratio of g(n) and f(n) is log_b a; in other words, some number which doesn't depend on n. This agrees with the fact that bases of logarithms are often omitted in asymptotics; for example, the form f(n) = Θ(log n) without a base is quite common.

Sixth example: f(n) = log^k n, the k-th power of the logarithm of n, for k an arbitrary positive number. Then f(n) = O(n^l), but not Θ(n^l), for any positive number l. For example, k may be equal to 1 billion while l is 1 divided by 1 billion. This fact can be proved using L'Hôpital's rule.
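The logarithm example follows from the change-of-base identity: the ratio of the two functions is a constant, so they have the same asymptotics:

```latex
\log_a n = \frac{\log_b n}{\log_b a}
\quad\Longrightarrow\quad
\frac{g(n)}{f(n)} = \frac{\log_b n}{\log_a n} = \log_b a = \text{const},
\quad\text{hence } f(n) = \Theta(g(n)).
```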
It's a rule from the first semester of a university course of mathematical analysis. If you are not a student yet, then just believe this fact. It's a paradoxical result, especially if we note that in the case of l equal to 0, f(n) = Ω(n^l) and grows faster. So we've seen a discontinuity of the asymptotics over l: when l becomes 0, then in terms of asymptotic comparison, we see something like a quantum leap.

Seventh example: f(n) = n^k for k an arbitrary positive number. Then one can prove, using L'Hôpital's rule again, that f(n) is, for example, O(2^n), or O(2^n · n), or O(3^n). These asymptotics describe a wide class of brute-force algorithms iterating over some objects. For almost any problem, there exists a simple brute-force solution, but in most cases its efficiency is not enough, so we need to find an algorithm which works faster; in most cases, a polynomial one.

But how to estimate whether a developed algorithm is efficient enough or not? In practice, one usually uses the following technique. First of all, an asymptotic estimation of the algorithm is calculated; it can be done before implementation. Why? Because the asymptotic speed of growth of the efficiency function is, as one can see, a characteristic of the algorithm itself, which remains the same after changing, for example, from the C language to the Pascal language. After that, the maximal possible arguments allowed by the constraints in the problem statement are substituted into the function, and the hidden constant in big O is set to one. Suppose we get a number C as a result. This C is a rough estimation of the number of actions. As one can see from practice, if the time limit is one or two seconds, then the maximal C with which a program has a chance to meet the time limit is about 10⁸, or 100 million.
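This rule of thumb can be sketched as a tiny helper; the thresholds come from the lecture's figures, while the function itself and its category names are hypothetical:

```cpp
#include <string>

// Rough feasibility estimate for a 1-2 second time limit: substitute the
// maximal n into the asymptotic function (hidden constant taken as 1) and
// compare the result with ~10^8 operations, the ballpark figure from the
// lecture. The cut-off values are heuristics, not guarantees.
std::string estimateFeasibility(double operations) {
    if (operations < 1e8) return "almost surely fits the time limit";
    if (operations <= 3e8) return "borderline: depends on the hidden constant";
    return "almost surely too slow";
}
```

For example, an O(n²) algorithm with n = 10⁵ gives about 10¹⁰ operations ("almost surely too slow"), while an O(n log n) algorithm with the same n gives roughly 1.7 · 10⁶ and easily fits.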
If C is significantly less than 100 million, for example, 20 or 30 million, 1 million, or even less, then the program will meet the time limit almost surely. But if C is significantly more than 100 million, for example, 1 billion, a couple of billions or more, then the program will not meet the time limit, almost surely. If C occurs to be about 100 million to 300 million, then success depends on the program itself; well, more precisely, on the hidden constant which occurs inside the algorithm and depends on a wide class of small details. Particularly, in my Olympiad practice, there were algorithms with 300 or 400 million actions which worked fast, and also algorithms with 20 million actions which caused the verdict "time limit exceeded".

So we have considered the common case: what a problem of competitive programming is, and how to estimate its efficiency. In the following video, we will consider examples of the simplest algorithms which are needed in competitive programming. See you in the next video.