
Software Development Approaches. Structural approach

1. Coding

At the software development stage, the following main activities are performed: coding; testing; development of the software help system; creation of user documentation; and building and installing the release version of the software.

Coding is the process of converting the results of high-level and low-level design into a finished software product. In other words, during coding the model of the software product is expressed by means of the selected programming language, which can be any existing language. The language is chosen either at the customer's request or with regard to the problem being solved and the developers' personal experience.

When coding, it is necessary to follow the standard for the chosen language: for the C language, for example, it is ANSI C, and for C++ it is ISO/IEC 14882, "Standard for the C++ Programming Language".

In addition to the generally accepted standard for a programming language, a company may also apply its own additional requirements to how programs are written. These mainly concern the rules for formatting program text.

Following the language standard and the company's rules makes it possible to create a program that works correctly, is easy to read and understandable to other developers, and contains information about the developer, the date of creation, the name and purpose of the code, as well as the data needed for configuration management.

At the coding stage, the programmer writes programs and tests them himself. Such testing is called unit testing. All issues related to software testing are discussed in Chapter 10, which also describes the testing technique used at the software development stage. This technique is called "glass box" testing; it is sometimes also called "white box" testing, in contrast to the classical concept of the "black box".

In black box testing, the program is treated as an object whose internal structure is unknown. The tester enters data and analyzes the results, but does not know how the program works. When selecting tests, the specialist looks for input data and conditions that, from his point of view, can lead to non-standard results. First of all, he is interested in those representatives of each class of input data for which errors in the program under test are most likely to show up.
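To make the contrast concrete, here is a minimal black-box style unit test sketch in Python. The calc_cost function and its pricing rules are invented for illustration; the point is that the tests are derived only from the specification and from representative classes of input, never from the function body.

```python
import unittest


def calc_cost(weight_kg: float) -> float:
    """Example function under test; a black-box tester never reads this body."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg <= 2.0 else 5.0 + (weight_kg - 2.0) * 1.5


class BlackBoxTests(unittest.TestCase):
    # Tests are chosen from the specification only: typical values and
    # values likely to provoke errors, one per interesting input class.
    def test_typical_parcel(self):
        self.assertEqual(calc_cost(2.0), 5.0)

    def test_heavy_parcel(self):
        self.assertAlmostEqual(calc_cost(4.0), 8.0)

    def test_invalid_weight_rejected(self):
        with self.assertRaises(ValueError):
            calc_cost(0)


if __name__ == "__main__":
    unittest.main()
```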

With "glass box" testing the situation is completely different. The tester (in this case the programmer himself) develops tests based on knowledge of the source code, to which he has full access. This gives him the following advantages.

1. Directed testing. The programmer can test the program in parts and develop special test routines that call the module under test and pass it the data of interest. An individual module is much easier to test precisely as a "glass box".

2. Full code coverage. The programmer can always determine which code fragments are exercised by each test. He sees which code branches remain untested and can choose the conditions under which they will be exercised. How to track the code coverage of the tests you run is described below.
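A minimal sketch of how knowledge of the source code directs testing toward uncovered branches. The classify function is invented; the coverage.py commands in the final comment are one common, assumed way to confirm the coverage mechanically.

```python
# The programmer sees in the source that test_minor exercises only one
# of the two branches of classify() and adds a test for the other.

def classify(age: int) -> str:
    if age < 18:
        return "minor"   # exercised by test_minor
    return "adult"       # not exercised until test_adult is added


def test_minor():
    assert classify(10) == "minor"


def test_adult():
    # Added after inspecting the uncovered branch.
    assert classify(30) == "adult"


if __name__ == "__main__":
    test_minor()
    test_adult()
    print("both branches exercised")

# A coverage tool such as coverage.py can confirm this mechanically, e.g.:
#   coverage run --branch this_file.py
#   coverage report -m
```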

3. Ability to control the flow of commands. The programmer always knows which function should execute next in the program and what its current state should be. To find out whether the program actually works the way he thinks it does, the programmer can add debugging commands that report the progress of execution, or use a special tool called a debugger. A debugger can do many useful things: track and change the sequence in which program commands are executed, show the contents of variables and their addresses in memory, and so on.
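A small illustrative sketch of the two techniques mentioned: debug output and an entry point into the interactive debugger. The average function is hypothetical; breakpoint() drops into Python's standard pdb debugger.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)


def average(values):
    log.debug("average() called with %r", values)  # debug output of progress
    total = sum(values)
    count = len(values)
    # breakpoint()  # uncomment to stop here in pdb: step with "n",
    #               # inspect variables with "p total", continue with "c"
    return total / count


if __name__ == "__main__":
    print(average([2, 4, 6]))
```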

4. Ability to track data integrity. The programmer knows which part of the program is supposed to modify each data element. By tracking the state of the data (with the same debugger), he can detect errors such as data being changed by the wrong module, data being interpreted incorrectly, or data being poorly organized. The programmer can also automate such testing himself.
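A brief sketch of guarding data integrity in code, using a hypothetical Account class: every modifying operation re-checks an invariant, so a change made by the wrong code path or with an invalid value fails immediately during testing.

```python
class Account:
    """Illustrative class whose balance must never become negative."""

    def __init__(self, balance: float = 0.0):
        self._balance = balance

    @property
    def balance(self) -> float:
        return self._balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
        self._check_invariant()

    def withdraw(self, amount: float) -> None:
        if amount <= 0 or amount > self._balance:
            raise ValueError("invalid withdrawal")
        self._balance -= amount
        self._check_invariant()

    def _check_invariant(self) -> None:
        # The invariant the programmer knows must hold after every change.
        assert self._balance >= 0, "balance corrupted"


if __name__ == "__main__":
    acc = Account()
    acc.deposit(100.0)
    acc.withdraw(30.0)
    print(acc.balance)  # 70.0
```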

5. Visibility of internal boundary points. The source code reveals boundary points of the program that are hidden from outside view. For example, several completely different algorithms may be used to perform a certain action, and without looking at the code it is impossible to tell which one the programmer has chosen. Another typical example is overflow of a buffer used to store input data temporarily. The programmer can tell right away at what amount of data the buffer will overflow, and he does not need to run thousands of tests to find out.
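A sketch of boundary-point testing with an invented fixed-capacity InputBuffer: because the programmer can read the capacity in the source, the tests target exactly the boundary and one step past it.

```python
import unittest


class InputBuffer:
    """Toy buffer with a fixed capacity, standing in for a hidden boundary."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self._items = []

    def push(self, item) -> None:
        if len(self._items) >= self.capacity:
            raise OverflowError("buffer full")
        self._items.append(item)


class BoundaryTests(unittest.TestCase):
    # Knowing the capacity from the source, the programmer tests exactly
    # at the boundary instead of probing with thousands of inputs.
    def test_fill_to_capacity_succeeds(self):
        buf = InputBuffer(capacity=4)
        for i in range(4):
            buf.push(i)

    def test_one_past_capacity_fails(self):
        buf = InputBuffer(capacity=4)
        for i in range(4):
            buf.push(i)
        with self.assertRaises(OverflowError):
            buf.push("one too many")


if __name__ == "__main__":
    unittest.main()
```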

6. Ability to test with the chosen algorithm in mind. Testing data processing that uses very complex computational algorithms may require special techniques; matrix transformations and data sorting are classic examples. A tester needs to know exactly which algorithms are used and may have to consult specialized literature; the programmer, unlike an outside tester, already knows this from the code.
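A sketch of algorithm-aware testing: a hand-written sort (a simple insertion sort, standing in for whatever complex algorithm was actually chosen) is checked against a trusted reference implementation on many random inputs.

```python
import random


def my_sort(values):
    """Hand-written insertion sort standing in for the algorithm under test."""
    result = list(values)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result


def test_against_reference():
    # The built-in sorted() serves as the trusted reference oracle.
    for _ in range(1000):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        assert my_sort(data) == sorted(data)


if __name__ == "__main__":
    test_against_reference()
    print("ok")
```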

Glass box testing is an integral part of the programming process. Programmers do this work all the time: they test each module after writing it, and then again after integrating it into the system.

When performing unit testing, you can use either structural or functional testing techniques, or both.

Structural testing is one type of glass box testing. Its main idea is the correct choice of the program path to be tested. Functional testing, by contrast, belongs to the category of black box testing: each function of the program is tested by feeding it input and analyzing the output, and the internal structure of the program is taken into account very rarely.

Although structural testing has a much stronger theoretical foundation, most testers prefer functional testing. Structural testing lends itself better to mathematical modeling, but this does not mean at all that it is more effective. Each of the techniques reveals errors that the other misses, so from this point of view they can be considered equally effective.

The object of testing can be not only a complete program path (the sequence of commands the program executes from start to finish), but also individual sections of it. It is completely unrealistic to test all possible ways of executing a program. Therefore, testing specialists single out, from all possible paths, those groups that must be tested without fail. For the selection they use special criteria, called coverage criteria, which define a quite realistic (even if rather large) number of tests. These criteria are sometimes called logical coverage criteria, or completeness criteria.
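A small sketch of why coverage criteria are needed. The apply_discounts function is invented: with three independent conditions it has 2^3 = 8 execution paths, so full path coverage needs eight tests, while the branch coverage criterion is already satisfied by two.

```python
def apply_discounts(price: float, is_member: bool, has_coupon: bool, bulk: bool) -> float:
    if is_member:
        price *= 0.95
    if has_coupon:
        price -= 5
    if bulk:
        price *= 0.9
    return price


# Full path coverage: 8 tests, one per combination of the three flags.
# Branch coverage: 2 tests are enough, e.g. all branches taken / all skipped.
def test_all_branches_taken():
    assert apply_discounts(100, True, True, True) == (100 * 0.95 - 5) * 0.9


def test_all_branches_skipped():
    assert apply_discounts(100, False, False, False) == 100


if __name__ == "__main__":
    test_all_branches_taken()
    test_all_branches_skipped()
    print("branch coverage achieved with two tests")
```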

3. Development of a help system for a software product. Creating User Documentation

It is advisable to appoint one of the project employees as the technical editor of the documentation. This employee can do other work, but his main task should be to analyze the documentation, even if other employees develop it.

It often happens that several people work on creating software, but none of them is fully responsible for its quality. As a result, the software not only fails to benefit from having more people develop it, but actually suffers, since everyone subconsciously shifts responsibility to others and expects colleagues to do this or that part of the work. The problem is solved by appointing an editor who bears full responsibility for the quality and accuracy of the technical documentation.

The software help system is built from the material developed for the user manual. It is formed and created by the person responsible for this work: either the technical editor, or one of the developers together with the technical editor.

A well-documented software product has the following advantages.

1. Ease of use. If the software product is well documented, it is much easier to use. Users learn it faster, make fewer mistakes, and as a result do their job faster and more efficiently.

2. Lower cost of technical support. When users cannot figure out how to perform the actions they need, they call the manufacturer's technical support service. Maintaining such a service is very expensive. A good manual, on the other hand, helps users solve problems on their own and reduces the need to contact the technical support group.

3. High reliability. Unclear or sloppy documentation makes software less reliable, since its users are more likely to make mistakes and find it hard to figure out what caused them and how to deal with their consequences.

4. Ease of maintenance. A huge amount of money and time is spent analyzing problems caused by user errors. Changes made in new software releases are often just changes to the interface of old functions, introduced so that users can finally figure out how to use the software and stop calling technical support. A good manual goes a long way toward preventing such problems.

When considering software development technology, it is necessary to take a systems approach, which means considering not individual aspects of the software development problem but the problem as a whole. The systems approach is applied both in time and in "space".

The systems approach in time considers the sequence of stages of software creation, from the moment an unmet need for software arises until that need is satisfied and the resulting software product is supported in operation.

The systems approach in "space" considers the software being developed as part of a larger system. Based on a study of the information needs of the system that will include the software, the goals and the set of software functions are formulated and software prototypes are analyzed. Software requirements are then formed and documented.

Modern software development technology considers programming as one of the development stages in the chain of successive stages of the development cycle. All these stages are united by the concept of the software life cycle and must be supported by appropriate software and hardware tools.

In accordance with the international standard ISO/IEC 12207, "Information technology - Software life cycle processes", the software development process includes the following stages of the software life cycle:

1) analysis of system requirements and scope;

2) system architecture design;

3) analysis of software requirements (specifications, external interfaces);

4) software architecture design;

5) detailed design of each unit of software;

6) software coding (programming);

7) testing of software units;

8) integration (unification of software) and testing of a set of software units;

9) software qualification tests (integrated testing);

10) system integration (integrating the software units with the hardware units);

11) qualification tests of the system;

12) software installation.

Thus, the software development process starts from the system where this software will be used and ends again in the system.

The development stages of the software life cycle are followed by the stage of software operation and maintenance. Sometimes the list of life cycle stages is given with the 12 stages somewhat aggregated, for example: system design and definition of software requirements, design of the software package, design of software algorithms, programming (coding), stand-alone software debugging, integrated software debugging, and software operation.

Neglecting the software design stages and trying to start programming immediately, without sufficient study of the algorithms and of how the software's structural units interact, often leads to a chaotic development process with little chance of success.

Spiral software life cycle model. "Heavy and light" (fast) software development technologies

The life cycle model considered so far is a cascade (waterfall) model. This type of model is suitable for software whose requirements can be formulated completely and precisely at the very start of development.

However, the real process of creating software does not always fit into such a rigid scheme, and it is often necessary to return to earlier stages to clarify or revise decisions already made.

For software, as for other complex systems whose initial requirements are not sufficiently complete, an iterative development process is characteristic. Moreover, for some types of software it is even desirable to move to the next stage as quickly as possible; the shortcomings that such haste inevitably produces are then eliminated in the next iteration, or remain in the product for good.

The main goal is to obtain workable software as soon as possible, thereby activating the process of clarifying and supplementing the requirements. This is the so-called spiral software life cycle model.

At each turn of the spiral, a product version is created, software requirements are specified, and the work of the next turn is planned. The spiral software life cycle model reflects the objectively existing process of iterative software development (Fig. 8.2).

The spiral life cycle scheme is considered to be intended not so much for hasty developers as for software whose low-quality first versions are acceptable given the functional purpose of the software.

There is a school of "fast" software development technologies (Agile Software Development) that provides an ideological justification for the views associated with the spiral life cycle model. These technologies rest on four ideas:

  • direct interaction between individuals is more important than formal procedures and tools;
  • working software is more important than documentation for it;
  • cooperation with the customer is more important than formal contracts;
  • a quick response to external changes is more important than strict adherence to plans.


Fig. 8.2 - Scheme of the spiral software life cycle

In other words, fast technologies aim to replace formal, labor-intensive, documented interaction procedures during development with direct interactive ones. This is feasible for small projects, with suitably qualified staff, with developers and customers placed "in the same room", and when the software being developed is for non-critical systems.

The correctness of these principles, at least to a certain extent (when development is carried out by a small number of qualified and dedicated enthusiasts) and for certain types of software, is difficult to dispute. However, Agile technologies, as their own ideologues recognize, are applicable to software projects of a certain class and scale, just like the spiral life cycle model in general: namely, where software errors lead only to some inconvenience or to the loss of recoverable funds.

Where incorrectly functioning software threatens human life or can cause large material losses, full-fledged, well-thought-out technologies must be used to ensure the reliability of the software product.

As the scale of a software project grows and the number of people involved in it increases, so does the need for the rigid development technology that makes up the cascade software life cycle. Documentation is needed here, since any developer may be lost at any moment; formalization of inter-program interfaces, software change management and the like are also necessary. It is no accident that the cascade life cycle model has been written into software development standards. At the same time, it also allows an iterative development process, thanks to the staged design it provides for the CTS and the software for them.

For very large software projects (more than 100 developers), development technology is a key factor that affects not only the quality of the software but the very possibility of creating it.

"Heavy" and "light" software development technologies. Developers of many types of software consider the waterfall life cycle model too regulated, too document-heavy, and therefore irrational. The school of "fast" (light) software development technologies (Agile Software Development) provides an ideological justification for these views. These technologies rest on four ideas:

  1. direct interaction between individuals is more important than formal procedures and tools;
  2. working software is more important than the availability of documentation for it;
  3. cooperation with the customer is more important than formal agreements with him;
  4. a quick response to external changes is more important than strict adherence to plans.

The correctness of these principles, except perhaps for the third, is difficult to dispute, at least to a certain extent (when development is carried out by a small number of qualified programmer "enthusiasts" who need no supervision or extra motivation). However, Agile technologies, as their ideologues themselves recognize, are applicable to software projects of a certain class and scale, like the spiral life cycle model in general: namely, where software errors lead only to some inconvenience or to the loss of recoverable funds, and where software requirements change constantly because they were poorly defined in advance and rapid adaptation to those changes is required.

Fast technologies are an attempt to reach a compromise between strict development discipline and its complete absence, in the name of reducing the flow of paperwork that accompanies development. Precisely because they minimize the documents that legally fix the developer's responsibility, fast technologies cannot guarantee high reliability of the software product.

An example of Agile technologies is Extreme Programming (XP). Iterations in XP are very short and consist of four activities: coding, testing, listening to the customer, and designing. The principles of XP (minimalism, simplicity, customer participation, short cycle times, close interaction between developers, who should sit in the same room and hold short daily meetings together with the customer) seem reasonable and have long been used outside of fast technologies as well; in XP, however, they are taken to extreme values.

An analysis of many software projects has shown that lightweight technologies, which preach self-organization, emphasize the individual abilities of developers and rely on short development iterations in a spiral model (XP, SCRUM), are widespread and often lead to success, making the most of the features of work in small teams.

Where incorrectly working software threatens human life or can cause large material losses, orderly, fully thought-out and predictable formalized "heavy" technologies should be used, so that the reliability of the software product is ensured even with developers of average skill. As the scale of a software project grows and the number of its participants increases, so does the need for a rigid, formal development technology that fixes the responsibility of each participant, that is, the technology that makes up the cascade software life cycle. It is no accident that the waterfall life cycle model has been written into software development standards.

In large development teams, the problem of management comes to the fore.

For very large software projects, the issues of orderly, coordinated development (structuring, integration, ensuring that programs interact correctly, and organizing the correct, coordinated introduction of inevitable changes) are key and affect the very possibility of creating such systems at all.

In small software projects, algorithmic refinements and the influence of individual talented people play a decisive role, whereas in large projects these factors are levelled out and do not have a decisive influence on the development process.

Software developers of average ability, and they are the majority, who maintain technological discipline within a suitable technology, should be able to produce software of the required quality. "Keep order and it will keep you."

Annotation: This lecture considers the agile approach to software development and the basic principles of agile development. It provides a list of techniques that correspond, to one degree or another, to the principles of agile software development, and analyzes the key values and principles of agile development.


Purpose of the lecture:

Gain an understanding of the purpose and basic principles of agile software development.

Introduction

The agile software development methodology is focused on an iterative approach in which software is created incrementally, in small steps that each implement a certain set of requirements. It is assumed that the requirements may change. Teams using agile methodologies are formed of versatile developers who perform a variety of tasks in the process of creating a software product.

When agile methodologies are used, risk is minimized by reducing development to a series of short cycles called iterations, each lasting two to three weeks. An iteration is a set of tasks scheduled to be completed within a specific period of time. In each iteration a workable version of the software system is created in which the customer requirements with the highest priority (for that iteration) are implemented. Each iteration includes all the tasks necessary to produce working software: planning, requirements analysis, design, coding, testing and documentation. Although a single iteration is generally not enough to release a new version of the product, it is assumed that the current software is ready for release at the end of each iteration. At the end of each iteration the team re-prioritizes the requirements for the software product and may adjust the course of system development.

Principles and meaning of agile development

For the agile development methodology, key postulates are declared that allow teams to achieve high performance:

  • people and their interaction;
  • delivery of working software;
  • cooperation with the customer;
  • response to change.

People and interaction. People are the most important ingredient of success. Individual team members and good communication are essential for high-performing teams. To facilitate communication, agile practices call for frequent discussion of work results and of changes to decisions. Discussions may be held daily for a few minutes, and at the end of each iteration the results of the work are analyzed in a retrospective. To communicate effectively in meetings, team members must adhere to the following key rules of conduct:

  • respect the opinion of each team member;
  • be truthful in all communication;
  • keep all data, actions and decisions transparent;
  • trust that each participant will support the team;
  • stay committed to the team and its goals.

In addition to an effective team and good communication, agile methodologies require good software tools to build high-performing teams.

Working software is more important than comprehensive documentation. All agile methodologies emphasize the need to deliver small pieces of working software to the customer at predetermined intervals. The software, as a rule, must pass unit-level and system-level testing. The amount of documentation should be kept to a minimum. During design, the team should keep up to date a short document containing the rationale for the chosen decisions and a description of the structure.

Cooperation with the customer is more important than formal contract terms. For a project to be completed successfully, regular and frequent communication with the customer is necessary. The customer must regularly take part in discussing the decisions made about the software and express his wishes and comments. Involving the customer in the software development process is necessary for creating a quality product.

Responding quickly to change is more important than following a plan. The ability to respond to change largely determines the success of a software project. During the creation of a software product, customer requirements often change; customers frequently do not know exactly what they want until they see working software. Agile methodologies seek feedback from customers throughout the creation of the software product. Responsiveness to change is essential for creating a product that satisfies the customer and delivers business value.

The tenets of agile development are supported by 12 principles. Specific agile methodologies define processes and rules that conform to these principles to a greater or lesser degree. Agile methodologies for creating software products are based on the following principles:

  1. The highest priority is to satisfy the customer's wishes through the delivery of useful software in a short time, followed by continuous updates. Agile practices include fast initial release and frequent updates. The goal of the team is to deliver a working version within a few weeks of starting the project. Going forward, software systems with incremental functionality should ship every few weeks. The customer can start commercial operation of the system if he considers it functional enough. Also, the customer can simply get acquainted with the current version of the software, provide feedback with comments.
  2. Do not ignore changing requirements, even late in development. Agile processes allow changes to be taken into account to give the customer a competitive advantage. Teams using agile methodologies strive to keep the program structure of high quality, so that changes have minimal impact on the system as a whole.
  3. Deliver new working versions of the software frequently, at intervals of one week to two months, with a preference for shorter deadlines. At the same time, the goal is to deliver a program that meets the needs of the user, with a minimum of accompanying documentation.
  4. Customers and developers must work together throughout the project. It is believed that for a successful project, customers, developers, and all stakeholders must communicate often and in many ways to purposefully improve the software product.
  5. Projects should be implemented by motivated people. Give the project team a healthy working environment, provide the support they need, and trust that the team members will get the job done.
  6. The most effective and productive method of conveying information to the development team and exchanging opinions within it is a face-to-face conversation. In agile projects, the main mode of communication is simple human interaction. Written documents are created and updated incrementally as the software is developed and only when necessary.
  7. A working program is the main indicator of project progress. How close an agile project is to completion is judged by how well the current program meets the customer's requirements.
  8. Agile processes encourage sustainable development. Customers, developers and users must be able to maintain a constant pace indefinitely.
  9. A relentless focus on engineering excellence and quality design enhances the returns on agile technologies. Agile team members strive to create quality code by regularly refactoring.
  10. Simplicity is the art of achieving more by doing less. Team members solve current tasks as simply and efficiently as possible. If there is a problem in the future, then it is possible to make changes to the quality code at no great cost.
  11. The best architectures, requirements, and designs come from self-organizing teams. In flexible teams, tasks are assigned not to individual members, but to the team as a whole. The team itself decides how best to implement the requirements of the customer. Team members work collaboratively on all aspects of the project. Each participant is allowed to contribute to the common cause. No team member is solely responsible for the architecture, requirements, or tests.
  12. The team should regularly think about how to become even more effective, and then adjust and fine-tune their behavior accordingly. An agile team constantly adjusts its organization, rules, agreements, and relationships.

The above principles correspond, to one degree or another, to a number of software development methodologies:

Agile Modeling - a set of concepts, principles and techniques (practices) that allow modeling and documentation to be performed quickly and easily in software development projects;
Agile Unified Process (AUP) - a simplified version of IBM Rational Unified Process (RUP) that describes a simple, understandable approximation (model) for building software for business applications;
OpenUP - an iterative-incremental software development method, positioned as a lightweight and flexible variant of RUP;
Agile Data Method - a group of iterative software development methods in which requirements and solutions are reached through the collaboration of different cross-functional teams;
DSDM - a dynamic systems development methodology based on the concept of rapid application development (RAD); it is an iterative and incremental approach that emphasizes continuous user/consumer involvement in the process;
Extreme Programming (XP) - extreme programming;
Adaptive Software Development (ASD) - adaptive software development;
Feature Driven Development (FDD) - development driven by the gradual addition of functionality;
Getting Real - an iterative approach without functional specifications, used for web applications;
MSF for Agile Software Development - Microsoft's agile software development methodology;
Scrum - establishes rules for managing the development process and allows existing coding practices to be used while adjusting requirements or making tactical changes.

Waterfall model: stages and their main activities:
  • requirements analysis - drafting the product specification;
  • design - drafting the product architecture;
  • implementation - developing the source code;
  • integration - integrating the parts of the source code;
  • testing - testing and troubleshooting.












The Unified Software Development Process (USDP) uses the following models:
  • the use case model describes the cases in which the application will be used;
  • the analysis model describes the base classes of the application;
  • the design model describes the connections and relationships between classes and dedicated objects;
  • the deployment model describes the distribution of the software across computers;
  • the implementation model describes the internal organization of the program code;
  • the test model consists of test components, test procedures and various test cases.








Typical software product architecture components and typical software requirements:
  • program organization;
  • main system classes;
  • data organization;
  • business rules;
  • user interface;
  • resource management;
  • security;
  • performance;
  • scalability;
  • interaction with other systems (integration);
  • internationalization and localization;
  • input and output data;
  • error handling.


Fault tolerance is a set of system properties that increases reliability by detecting errors, recovering from them, and isolating their consequences from the rest of the system. When designing any real system, ensuring fault tolerance requires anticipating all the situations that could lead to a system failure and developing mechanisms for handling them. Reliability is the ability of a system to withstand faults and failures. A failure is the transition of the system, as a result of an error, to a completely inoperable state; a fault is an error in the operation of the system that does not lead to such a failure. The fewer faults and failures occur over a given period of time, the more reliable the system is considered to be.
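A minimal sketch of these ideas in code: a fault (here a simulated timeout from an unreliable operation) is detected, retried and isolated behind a fallback value, so it does not escalate into a failure of the whole system. The function names and retry policy are illustrative assumptions.

```python
import logging
import time

log = logging.getLogger("sensor")


def read_sensor_unreliable() -> float:
    """Stand-in for an operation that suffers transient faults."""
    raise TimeoutError("no response")


def read_sensor(retries: int = 3, fallback: float = 0.0) -> float:
    # Detect the fault, retry, and finally fall back, so a single fault
    # does not turn into a failure of the whole system.
    for attempt in range(1, retries + 1):
        try:
            return read_sensor_unreliable()
        except TimeoutError as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(0.1)
    log.error("sensor unavailable, using fallback value")
    return fallback


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    print(read_sensor())
```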




Further aspects of the architecture to be considered:
  • the feasibility of implementing the developed architecture;
  • redundant functionality;
  • decisions on purchasing ready-made software components;
  • the change strategy.


A checklist of questions that allows a conclusion to be drawn about the quality of the architecture:
  • Is the general organization of the program clearly described, and does the specification include an overview of the architecture and its rationale?
  • Are the main components of the program adequately defined, along with their areas of responsibility and their interaction with other components?
  • Are all the functions specified in the requirements implemented by a reasonable number of system components?
  • Are the most important classes described and justified?
  • Is the organization of the database described?
  • Are all business rules defined, and is their impact on the system described?


The checklist continues:
  • Is the user interface design strategy described?
  • Is the user interface modular, so that changes to it do not affect the rest of the system?
  • Is the input/output strategy described?
  • Has a performance analysis been carried out for the system that will be implemented with this architecture?
  • Has a reliability analysis of the designed system been carried out?
  • Has the scalability and extensibility of the system been analyzed?


Software refactoring. Refactoring is a change in the internal structure of the software without changing its external behavior, intended to make the software easier to modify; it includes adapting the software to new hardware and operating systems, new development tools, new requirements, and changes in software architecture and functionality. Reasonable reasons to refactor (a short before/after sketch follows this list):
  • code is repeated;
  • a method's implementation is too large;
  • loops are nested too deeply, or a loop itself is very large;
  • a class has poor cohesion (the properties and methods of a class should describe only one object);
  • a class interface does not form a consistent abstraction;
  • a method takes too many parameters (the number of parameters should be kept to a reasonable minimum);
  • individual parts of a class change independently of the other parts;


  • changing the program requires changing several classes in parallel (if this happens, the classes should be reorganized to localize future changes);
  • several inheritance hierarchies have to be changed in parallel;
  • several case blocks have to be changed (the program should be modified so that the case logic is implemented once and called as many times as needed);
  • related data members that are used together are not organized into classes (if the same set of data elements is used repeatedly, it is worth combining them and placing the operations performed on them in a separate class);


  • a method uses more elements of another class than of its own (the method should be moved to the other class and called from the old one);
  • an elementary data type is overloaded with meaning (to describe an entity of the real world it is better to use a class than to overload an existing data type);
  • a class has too limited functionality (it is better to get rid of it, transferring its functionality to another class);
  • "stray" data is passed along a chain of methods (data passed to a method only to be handed on to another method is called stray data; when this happens, try to change the architecture of classes and methods to get rid of it);


  • an intermediary object does nothing (if the role of a class is merely to redirect method calls to other classes, it is best to eliminate the intermediary and call the other classes directly);
  • one class knows too much about another (encapsulation should be made stricter so that a descendant has minimal knowledge of its parent);
  • a method has an unfortunate name;
  • data members are public (this blurs the line between interface and implementation, inevitably breaks encapsulation and limits the program's flexibility);
  • comments are needed to explain the source code;


  • a subclass uses only a small fraction of the methods of its ancestors (this happens when a new class is created only to inherit a few methods from the base class rather than to describe a new entity; to avoid it, the base class should be transformed so that it gives the new class access only to the methods it needs);
  • the code contains global variables (only variables that are genuinely used by the whole program should be global; all others should either be local or become properties of some object);
  • the program contains code that might be needed someday (when developing a system, it is better simply to mark the places where source code may be added in the future).
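A short before/after sketch of two of the reasons listed above (repeated code, and related data members not organized into a class): the address fields are gathered into a class and the duplicated formatting moves into it, while the external behaviour stays the same. All names are invented for illustration.

```python
from dataclasses import dataclass


# Before: duplicated formatting and loose, related parameters.
def print_invoice(name, street, city, zip_code, total):
    print(f"Invoice for {name}")
    print(f"{street}, {city} {zip_code}")
    print(f"Total: {total:.2f}")


def print_shipping_label(name, street, city, zip_code):
    print(name)
    print(f"{street}, {city} {zip_code}")


# After: the related data members form a class, and the shared
# formatting lives in one place; external behaviour is unchanged.
@dataclass
class Address:
    name: str
    street: str
    city: str
    zip_code: str

    def as_lines(self):
        return [self.name, f"{self.street}, {self.city} {self.zip_code}"]


def print_invoice_v2(addr: Address, total: float) -> None:
    print(f"Invoice for {addr.name}")
    print(addr.as_lines()[1])
    print(f"Total: {total:.2f}")


def print_shipping_label_v2(addr: Address) -> None:
    print("\n".join(addr.as_lines()))


if __name__ == "__main__":
    addr = Address("ACME Ltd", "1 Main St", "Springfield", "12345")
    print_invoice_v2(addr, 199.99)
    print_shipping_label_v2(addr)
```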

In the first part we chose, as indicators for comparing software development methodologies, a methodology's attitude toward iterative development and the degree of formality in preparing working materials and in development as a whole. In this part we use these same indicators to compare the best-known software development methodologies.

"We'll see how it goes" development

Alas, this is the most difficult category to describe, since it covers both the product of the convulsive thrashing of a beginner trying to complete his first project at any cost and quite mature, well-established methodologies that have absorbed many years of varied experience of specific development teams and may even be described in internal regulations. Since people who are able to develop their own methodology can, as a rule, also evaluate it themselves in terms of iteration and formalization, we will focus on beginners. Unfortunately, this most often means that rules for conducting development either do not exist at all, or have been developed and adopted but are not followed. An extremely low level of formalism in development is natural under such conditions. So far, so clear.

Development "How it works"

What about an iterative approach? Alas, as a rule it is not used in such projects, first of all because it would make it possible, after the very first iterations, to assess the project as extremely dubious and in need of urgent intervention from higher management to restore order. After all, in an iterative project the programmer's traditional answer that everything is "90% ready" only works until the first iteration is due to be completed…

Structural Methodologies


Structural methodologies are a group of methodologies developed, as a rule, before object-oriented languages came into widespread use. They all assume waterfall development. Yet, as it turns out, even the article that is usually cited as the first presentation of the waterfall approach said that it is desirable to start a project by developing a prototype, that is, to perform at least two iterations.

Nevertheless, the basis of these methodologies is the sequential transition from one activity to the next and the handover of the results (documents) of the completed stage to the participants of the next one.

These methodologies also assume a highly formalized approach, although statements about keeping documentation to a reasonable volume can be found in them. One non-obvious piece of evidence that software development methodologies were developed not only in the West is a quotation from a book published in our country in the early 1980s, which states that the degree of formalization of a programming task should be determined by how well the analyst and the programmer understand each other. And this despite the fact that the book dealt with the development of quite critical systems, as they are now called, in which errors lead to serious losses or even disasters.

Agile Methodologies

Agile methodologies are based on twelve principles, of which we will name only those that determine how these methodologies are assessed against the selected parameters:

  • the main thing is to satisfy the customer and provide him with the product as soon as possible;
  • new releases of the product should appear every few weeks, in extreme cases - months;
  • the most effective way to transfer knowledge to the development participants and between them is personal communication;
  • a running program is the best indicator of development progress.

Thus, these methodologies are clearly oriented toward iterative software development and minimal formalization of the process. However, a reservation is needed regarding the second point: these methodologies aim for the minimum level of formalization acceptable for a given project. At least one of the methodologies in the agile group, Crystal, has variants designed for processes with different numbers of participants and different criticality of the software being developed (criticality is determined by the possible consequences of errors, which can range from minor financial losses for fixing a bug to a catastrophe). So that the further comparison with agile methodologies is meaningful, we give brief descriptions of several of them.

eXtreme Programming, or XP (extreme programming)

The XP methodology, developed by Kent Beck, Ward Cunningham and Ron Jeffries, is the best known of the agile methodologies today. The very concept of "agile methodologies" is sometimes explicitly or implicitly identified with XP, which preaches communication, simplicity, feedback and courage. XP is described as a set of practices: the planning game, short releases, metaphors, simple design, code refactoring, test-first development, pair programming, collective code ownership, the 40-hour work week, constant customer presence and coding standards. Interest in XP grew from the bottom up, from developers and testers worn out by painful processes, documentation, metrics and other formalism. They did not reject discipline, but they were unwilling to follow formal requirements senselessly, and they looked for new fast and flexible approaches to developing high-quality programs.

When XP is used, careful preliminary design is replaced, on the one hand, by the constant presence in the team of a customer who is ready to answer any question and evaluate any prototype, and on the other, by regular revisions of the code (the so-called refactoring). Thoroughly commented code serves as the basis of the design documentation. A great deal of attention in the methodology is paid to testing. As a rule, for each new method a test is written first, and then the actual method code is developed until the test passes. These tests are kept in suites that run automatically after every code change.
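A minimal sketch of the test-first cycle described above, using Python's standard unittest module and an invented fizzbuzz example: the test is written first (and initially fails), then just enough code is added to make it pass, and the cycle repeats.

```python
import unittest


# Step 1: the test is written first; it fails until fizzbuzz() is implemented.
class FizzBuzzTest(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")


# Step 2: the simplest code that makes the test pass; the next small
# requirement (e.g. "Buzz" for multiples of five) gets its own test first.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)


if __name__ == "__main__":
    unittest.main()
```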

Although pair programming and the 40-hour work week are perhaps the best-known features of XP, they play a supporting role, contributing to high developer productivity and fewer development errors.

Crystal Clear

Crystal is a family of methodologies that determine the required degree of formalization of the development process, depending on the number of participants and the criticality of tasks.

The Crystal Clear methodology is less productive than XP, but it is as easy to adopt as possible. It requires minimal effort to introduce because it builds on people's habits. It is believed that this methodology describes the natural order of software development that establishes itself in sufficiently qualified teams, provided they are not busy deliberately implementing some other methodology.

Key Features of Crystal Clear:

  • iterative incremental development;
  • automatic regression testing;
  • users are actively involved in the project;
  • the composition of the documentation is determined by the project participants;
  • as a rule, code version control tools are used.

In addition to Crystal Clear, there are several other methodologies in the Crystal family designed for larger or more critical projects. They differ in slightly more stringent requirements for the amount of documentation and supporting procedures, such as change and version control.

Feature Driven Development

Feature Driven Development (FDD) operates in terms of a system function, or feature, a concept fairly close to the RUP use case. Perhaps the most significant difference is the additional restriction: "each function must be implementable in no more than two weeks". That is, if a use case is small enough, it can be treated as a function, and if it is large, it should be broken down into several relatively independent functions.

FDD includes five processes, with the last two being repeated for each feature:

  • development of a common model;
  • compiling a list of necessary system functions;
  • planning work on each function;
  • function design;
  • function construction.

The work on the project involves frequent builds and is divided into iterations, each of which is implemented using a specific set of features.

Developers in FDD are divided into "class owners" and "chief programmers". The chief programmers bring the owners of the classes involved into the work on the next feature. By comparison, in XP no one is personally responsible for particular classes or methods.

Common features

The list of agile methodologies is currently quite long. Nevertheless, the methodologies described here give a reasonably complete picture of the whole family.

Almost all agile methodologies use an iterative approach, in which only a limited amount of work associated with the release of the next release is planned in detail.

Almost all agile methodologies favor the most informal approach to development possible. If a problem can be solved in an ordinary conversation, then that is the best way to solve it; a decision should be written up as a paper or electronic document only when it is impossible to do without one.


GOSTs

GOSTs, like the CMM requirements described in the next section, are not methodologies. As a rule, they do not describe the software development processes themselves, but only formulate requirements for those processes, which various methodologies satisfy to one degree or another. Comparing these requirements against the same criteria by which we compare methodologies will help you decide at once which methodologies to use if you need to develop in accordance with GOST.

At present, the older GOSTs of the 19th and 34th series and the newer GOST R ISO/IEC 12207 are in force in Russia. The GOSTs of the 19th and 34th series are strictly oriented toward the cascade approach to software development. Development under these GOSTs is carried out in stages, each of which involves strictly defined work and ends with the release of a rather large number of highly formalized and extensive documents. Strict adherence to these standards therefore not only leads to a waterfall approach, but also imposes a very high degree of formalization on development.

GOST requirements

GOST 12207, in contrast to the standards of the 19th and 34th series, describes software development as a set of main and supporting processes that can operate from the beginning of a project to its end. The life cycle model can be chosen according to the characteristics of the project. Thus, this GOST does not explicitly prohibit the use of an iterative approach, though neither does it explicitly recommend it. GOST 12207 is also more flexible regarding the formality of the development process: it only indicates the need to document the main results of all processes, without listing required documents or prescribing their content.

Thus, GOST 12207 allows iterative and less formalized software development.

Development process maturity models (CMM, CMMI)

In addition to national and international standards, there are several approaches to certification of the development process. The most famous of them in Russia are, apparently, CMM and CMMI.

CMM (Capability Maturity Model) is a model of software development process maturity, designed to assess the maturity level of the development process in a particular company. The model defines five maturity levels. The first corresponds to "we'll see how it goes" development, where every project is a heroic feat for the developers. The second corresponds to more or less established processes, where a positive project outcome can be counted on with reasonable confidence. The third corresponds to the existence of developed and well-described processes used in development, and the fourth to the active use of metrics in management to set goals and monitor their achievement. Finally, the fifth level corresponds to the company's ability to optimize the process as needed.

CMM and CMMI requirements

After CMM appeared, specialized maturity models began to be developed for creating information systems, for the supplier selection process, and for some other areas. On their basis the integrated model CMMI (Capability Maturity Model Integration) was developed. CMMI also attempted to overcome a shortcoming of CMM that had become apparent by then: an exaggerated emphasis on formal process descriptions, in which the existence of certain documentation was valued much more highly than a well-established but undescribed process. Nevertheless, CMMI is still oriented toward a highly formalized process.

Thus, the basis of the CMM and CMMI models is the formalization of the development process. They aim developers at the implementation of a process described in detail in the regulations and instructions, which, in turn, cannot but require the development of a large amount of project documentation for appropriate control and reporting.

The relationship of CMM and CMMI to iterative development is more indirect. Formally, neither of them put forward specific requirements for adhering to a waterfall or iterative approach. However, according to some experts, CMM is more compatible with the waterfall approach, while CMMI also allows for an iterative approach.

RUP

Of course, RUP is an iterative methodology. Although RUP nowhere formally mandates the execution of all phases or any minimum number of iterations, the whole approach assumes that there will be quite a few of them; a very limited number of iterations does not let you take full advantage of RUP. At the same time, RUP can also be used in almost cascade-style projects that really include only a couple of iterations: one in the Construction phase and another in the Transition phase. Incidentally, this is the number of iterations actually used in waterfall projects: after all, testing and trial operation of a system involve making corrections, which may entail analysis, design and development activities, in effect one more pass through all the phases of development.

RUP Methodology

With regard to the formality of the methodology, RUP presents the user with a very wide range of possibilities. If you do all the work and tasks, create all the artifacts, and fairly formally (with an official reviewer, with the preparation of a full review in the form of an electronic or paper document, etc.) conduct all reviews, RUP can turn out to be an extremely formal, ponderous methodology. At the same time, RUP allows you to develop only those artifacts and perform only those works and tasks that are necessary in a particular project. And selected artifacts can be executed and reviewed with an arbitrary degree of formality. It is possible to require a detailed study and careful execution of each document, the provision of an equally carefully executed and formalized review, and even, following the old practice, to approve each such review at the scientific and technical council of the enterprise. Or you can limit yourself to an email or a sketch on paper. In addition, there is always one more possibility: to form a document in your head, that is, to think about the relevant issue and make a constructive decision. And if this decision concerns only you, then limit yourself, for example, to a comment in the program code.

Thus, RUP is an iterative methodology with a very wide range of possible solutions in terms of formalizing the development process.

To summarize the second part of the article: RUP, unlike most other methodologies, allows you to choose the degree of formalization and iterativeness of the development process over a wide range, depending on the characteristics of the project and of the developing organization.

And why this is so important - we will discuss in the next part.
