
Data visualization in science and technology. Ways to visualize data

Computer graphics is a field of computer science that deals with algorithms and technologies for data visualization. The development of computer graphics is mainly determined by two factors: the real needs of potential users and the capabilities of hardware and software. The needs of consumers and the capabilities of technology are steadily growing, and today computer graphics are actively used in various fields. The following areas of application of computer graphics can be distinguished:

  1. Information visualization.
  2. Modeling of processes and phenomena.
  3. Design of technical objects.
  4. Organization of the user interface.

Information visualization

Most scientific articles and reports cannot do without data visualization. A respectable form of data presentation is a well-structured table with the exact values of a function of some variables. But a graphical form is often more visual and effective, and in modeling and image processing, for example, it is the only possible one. Some types of information display of different origins are listed in the following table:

Many programs for financial, scientific, technical calculations use these and some other methods of data visualization. Visual presentation of information is an excellent tool for scientific research, a clear and compelling argument in scientific articles and discussions.

Modeling processes and phenomena

Modern graphics systems are powerful enough to create complex animations and dynamic images. Simulation systems, also called simulators, attempt to reproduce and visualize processes and phenomena that occur, or could occur, in reality. The best-known and most complex example of such a system is the flight simulator, which is used to model the environment and the flight process when training pilots. In optics, simulators are used to model complex, costly, or hazardous phenomena: for example, the simulation of imaging or of processes in laser resonators.

Design of technical objects

Design is one of the main stages in the creation of a product in engineering. Modern graphics systems make it possible to visualize the designed object, which helps identify and solve many problems early. The developer judges his work not only by numbers and indirect parameters: he sees the designed object on the screen. Computer systems make it possible to interact with the designed object and to simulate shaping a model from a plastic material. CAD systems greatly simplify and speed up the work of the design engineer, freeing him from the routine drafting process.

Organization of the user interface

In the last 5-7 years, the visual paradigm for organizing the interface between the computer and the end user has become dominant. The windowed graphical interface is built into many modern operating systems. The set of controls that are used to build such an interface has already been fairly standardized. Most users are already accustomed to such an organization of the interface, which allows users to feel more comfortable and increase the efficiency of interaction.

All this suggests that the operating system itself must already implement a fairly large number of functions for rendering controls. For example, the Windows operating system provides developers with GDI (Graphics Device Interface). As practice shows, for some applications the capabilities of the system API are quite enough for visualizing the processed data (constructing simple graphs, representing simulated objects and phenomena). But drawbacks such as low rendering speed and the lack of 3D support make it a poor fit for the visualization of scientific data and computer modeling. Scientific and technical programs with complex graphical output require faster, more powerful, and more flexible functions for visualizing calculated data, simulated phenomena, and designed objects.

Computer graphics technologies

In modern scientific and technical applications, complex graphics rendering is implemented using the OpenGL library, which has become the de facto standard in the field of 3D rendering. OpenGL is a highly efficient software interface to graphics hardware. It achieves its highest performance on systems equipped with modern graphics accelerators (hardware that offloads rendering computations from the CPU).

The library's architecture and algorithms were developed in 1992 by specialists from Silicon Graphics, Inc. (SGI) for proprietary Iris graphics workstation hardware. A few years later, the library was ported to many hardware and software platforms (including Intel + Windows) and today it is a reliable multi-platform library.

The OpenGL library is freely redistributable, which is its undoubted advantage and the reason for such widespread use.

OpenGL is not an object-oriented but a procedural library (about a hundred commands and functions) written in C. On the one hand, this is a drawback (computer graphics is a fertile area for object-oriented programming); on the other, programmers can use OpenGL from C++, Delphi, Fortran, and even Java and Python.

Several helper libraries are commonly used in conjunction with OpenGL, either to adapt the library to a given environment or to perform more complex rendering operations built on top of primitive OpenGL functions. In addition, there are a large number of specialized graphics libraries that use OpenGL as a low-level basis, a kind of assembler, on top of which complex graphics output functions are built (OpenInventor, vtk, IFL and many others). The OpenGL user community can be found at www.opengl.org

Microsoft has also developed the DirectX multimedia library and proposes it for similar purposes. DirectX is widely used in gaming and multimedia applications but has not seen wide adoption in scientific and technical applications, most likely because it works only under Windows.

Information visualization

According to the already established tradition, let's start with the definition.

Information visualization is the presentation of information in the form of graphs, diagrams, structural diagrams, tables, maps, etc.

ecsocman.edu.ru
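As a toy illustration of this definition, a few lines of Python can turn a small table of numbers into a rudimentary text chart. This is only a sketch; the labels and values below are invented:

```python
# A rudimentary text "bar chart": one way to visualize a small dataset.
# The data are made up purely for illustration.
sales = {"Q1": 12, "Q2": 30, "Q3": 21, "Q4": 8}

def bar_chart(data, width=30):
    """Render a dict of label -> value as horizontal text bars."""
    top = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / top * width)
        lines.append(f"{label:>3} | {bar} {value}")
    return "\n".join(lines)

print(bar_chart(sales))
```

Even this crude picture makes the relative magnitudes visible at a glance, which is exactly the point of the definition above.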

Why visualize information? "Silly question!" the reader will exclaim. Of course, text with pictures is perceived better than "gray" text, and pictures with text even better. It is no accident that we all love comics so much: they allow us to grasp information literally on the fly, seemingly without the slightest mental effort. And remember how much better you recalled, during your studies, the material of lectures that were accompanied by slides!

The first thing that comes to mind when we say "visualization" is graphs and diagrams (there it is, the power of association!). On the other hand, only numerical data can be visualized this way; no one has yet managed to build a graph from coherent text. For a text, we can build a plan and highlight the main thoughts (theses), making a short summary. We will talk about the disadvantages and dangers of note-taking a little later; for now, let us say that if you combine the plan and a short summary, "hanging" the theses on the branches of a tree whose structure corresponds to the structure (plan) of the text, you get an excellent block diagram of the text, which will be remembered far better than any synopsis. In this case, the branches play the role of those "tracks" connecting concepts and theses that we talked about earlier.

Remember how we built UML diagrams from the description of a designed software system received from its future users? The resulting pictures were grasped by both clients and developers much more easily and quickly than a textual description. In the same way, you can "depict" absolutely any text, not just the technical specification for a system: a fairy tale, a lecture, a fantasy novel, or the results of a meeting can all be presented as a convenient, easy-to-understand tree. You can build it however you like, as long as you end up with a clear and understandable diagram, which it would be nice to illustrate with appropriate drawings.
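The "text as a tree" idea can be sketched in a few lines: the plan of a text becomes a nested structure, and each node carries a thesis. The sample content below is invented for illustration:

```python
# A minimal sketch of representing a text as a tree of theses:
# nested dicts for the branches, one thesis per node.
outline = {
    "Flight simulator lecture": {
        "Why simulate": {"Cost": {}, "Safety": {}},
        "How it works": {"Visual system": {}, "Motion platform": {}},
    }
}

def render(tree, depth=0):
    """Flatten the tree into an indented outline, one thesis per line."""
    lines = []
    for thesis, branches in tree.items():
        lines.append("  " * depth + "- " + thesis)
        lines.extend(render(branches, depth + 1))
    return lines

print("\n".join(render(outline)))
```

The indentation plays the role of the branches: each deeper level hangs its theses off the thesis above it.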

Such schemes are also convenient in communication, when discussing any issues and problems. As practice shows, the absence of strict notation standards creates no communication difficulties for the participants in a discussion. On the contrary, the use of non-verbal forms of information presentation allows one to focus on the key points of the problem. Thus, visualization is one of the most promising ways to increase the efficiency of the analysis, presentation, perception, and understanding of information.

Wow, we are finally done with the tedious description of the scientific theories, methods, and techniques used to process, organize, and visualize information! The previous part of the chapter greatly tired both the author and the readers, and yet it was necessary: as a result, we saw that the features of our brain are already actively exploited by scientists in various fields, and that many things that seem familiar to us (personal computers, user interfaces, knowledge bases, etc.) were originally built with the associative nature of human thinking and its tendency toward hierarchical representation and visualization of information in mind. But the pinnacle and natural graphic expression of human thought processes is mind mapping, which we are finally getting to. Along the way we will try to expand our understanding of the principles of visual thinking.


As the amount of accumulated data grows, it becomes harder and harder to "digest" and interpret the results, even with arbitrarily powerful and versatile Data Mining algorithms. And, as you know, one of the tenets of Data Mining (DM) is the search for practically useful patterns. A pattern can become practically useful only if it can be comprehended and understood.

Methods for visual or graphical presentation of data include graphs, charts, tables, reports, lists, structural diagrams, maps, etc.

Traditionally, visualization has been viewed as an aid in data analysis, but now more and more research is talking about its independent role.

Traditional visualization methods can serve the following purposes:

• present information to the user in a visual form;

• compactly describe the patterns inherent in the original dataset;

• reduce dimensionality or compress information;

• recover gaps in the dataset;

• find noise and outliers in a dataset.
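The last application, finding noise and outliers, can be sketched in a few lines of Python. This is a minimal z-score filter, not the method of any particular Data Mining product; the readings and the cutoff of 2 standard deviations are chosen for illustration only:

```python
import statistics

# Flag points that lie more than `threshold` standard deviations
# from the mean. The cutoff of 2 is illustrative; large samples
# often use a stricter cutoff of 3.
def find_outliers(data, threshold=2.0):
    mean = statistics.mean(data)
    stdev = statistics.stdev(data)
    return [x for x in data if abs(x - mean) / stdev > threshold]

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 9.7, 10.0, 25.0]  # 25.0 is noise
print(find_outliers(readings))
```

On a scatter diagram the same point would stand out visually; the code merely automates what the eye does.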

Visualization techniques

Visualization methods, depending on the number of dimensions used, are usually classified into two groups:

• presentation of data in one, two and three dimensions;

• presentation of data in four or more dimensions.

Presentation of data in 4+ dimensions

Representations of information in four or more dimensions are inaccessible to direct human perception. However, special methods have been developed so that such information can still be displayed and perceived by a person.

The best-known methods of multidimensional information presentation are:

• parallel coordinates;

• Chernoff faces;

• radar (petal) charts.
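The essence of parallel coordinates is that every multidimensional record becomes a polyline across parallel vertical axes, one axis per dimension. A minimal sketch of the underlying computation (the records below are invented): min-max normalization, so that all axes share a single vertical scale:

```python
# Parallel coordinates draw each n-dimensional record as a polyline
# across n vertical axes. The core computation is normalizing every
# dimension to [0, 1].
records = [
    (170, 65, 30, 120),   # e.g. height, weight, age, blood pressure
    (180, 90, 45, 140),
    (160, 50, 25, 110),
]

def normalize(rows):
    """Min-max normalize each column; the results are the polyline heights."""
    cols = list(zip(*rows))
    spans = [(min(c), max(c) - min(c)) for c in cols]
    return [
        tuple((v - lo) / rng for v, (lo, rng) in zip(row, spans))
        for row in rows
    ]

for polyline in normalize(records):
    print(polyline)
```

Records that are similar across all dimensions produce polylines that run close together, which is how clusters become visible in a parallel-coordinates plot.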

Representation of spatial characteristics

A separate area of visualization is the visual presentation of the spatial characteristics of objects. In most cases, such tools highlight individual regions on a map and color them according to the value of the analyzed indicator.



The map is presented as a graphical view that displays data in the form of a three-dimensional landscape of arbitrarily defined and positioned shapes (bar graphs, each with its own height and color). This method makes it possible to visually show the quantitative and relational characteristics of spatially oriented data and to quickly identify trends in them.

Data Mining process. Domain analysis. Formulation of the problem. Data preparation.

Data Mining process. Initial stages

The DM process is a kind of exploration. Like any research, this process consists of certain stages, including elements of comparison, typing, classification, generalization, abstraction, repetition.

The DM process is inextricably linked to the decision-making process.

The DM process builds a model, and in the decision-making process, this model is exploited.

Consider the traditional DM process. It includes the following steps:

• analysis of the subject area;

• statement of the problem;

• preparation of data;

• building models;

• validation and evaluation of models;

• selection of a model;

• application of the model;

• correction and updating of the model.

In this lecture, we will take a closer look at the first three stages of the Data Mining process; the remaining steps will be discussed in the next lecture.

Stage 1. Analysis of the subject area

Research is the process of cognition of a certain subject area, object, or phenomenon with a specific purpose.

The research process consists in observing the properties of objects in order to identify and evaluate regular relationships between the indicators of these properties that are significant from the researcher's point of view.

The solution to any problem in the field of software development must begin with the study of the subject area.

A subject area is a mentally delimited area of reality that is subject to description or modeling and research.

The subject area consists of objects that are distinguished by their properties and are in certain relationships with each other or interact in some way.

A subject area is a part of the real world; it is infinite and contains data both significant and insignificant from the point of view of the research being conducted.

The researcher needs to be able to isolate the significant part. For example, when solving the problem "Should a loan be issued?", all data about the client's private life matter: whether the spouse has a job, whether the client has minor children, what his level of education is, and so on. For another banking problem, the same data may be entirely irrelevant. The significance of the data thus depends on the choice of the subject area.


In 1987, at the initiative of the ACM SIGGRAPH and the IEEE Computer Society Technical Committee on Computer Graphics, prompted by the need for new data methods, tools, and technologies, the tasks of the visualization field were formulated.


Visualization in Data Mining Tools

Each Data Mining algorithm uses its own visualization approach. When using each of the methods considered above (or rather, their software implementations), we obtained visualizers that helped us interpret the results produced by the corresponding methods and algorithms.

• For decision trees: a decision tree visualizer, a list of rules, a contingency table.

• For Kohonen maps: input maps, output maps, and other specific maps.

• For linear regression: the regression line acts as the visualizer.

• For clustering: dendrograms, scatter diagrams.

Scatter plots and graphs are often used to assess the performance of a particular method.

All of these ways of visualizing or displaying data can perform one of the following functions:

• illustrate the construction of a model (for example, by representing the structure (graph) of a neural network);

• help to interpret the obtained result;

• serve as a means of assessing the quality of the constructed model;

• combine the functions listed above (decision tree, dendrogram).

Visualization of Data Mining Models



The first function (illustrating the construction of a model) is, in fact, visualization of the Data Mining model itself. There are many ways to present a model, but a graphical one gives the user the most value. In most cases the user is not a modeling specialist; most often he is an expert in his own subject area. Therefore, the Data Mining model should be presented in the language most natural to him or, at least, contain a minimum of mathematical and technical elements.

Thus, accessibility is one of the main characteristics of a Data Mining model. Nevertheless, there is also a widespread and very simple way of representing a model: the "black box." In this case, the user does not understand the behavior of the model he is using, yet he still gets the result, the revealed patterns. A classic example of such a model is the neural network.

Another way is to present the model in an intuitive, understandable form. In this case, the user can really understand what is happening "inside" the model, which ensures his direct participation in the process. Such models give the user the opportunity to discuss or explain their logic with colleagues, clients, and other users.

Understanding a model leads to understanding its content, and as a result, confidence in the model increases. A classic example is the decision tree: a constructed decision tree genuinely improves understanding of the model, i.e. of the Data Mining tool used.

In addition to understanding, such models let the user interact with the model, ask it questions, and get answers. An example of such interaction is the what-if facility. Through a system-user dialog, the user can deepen his understanding of the model.

Examples of visualization tools that can be used to assess the quality of a model are a scatter plot, a contingency table, a graph of the change in the magnitude of the error:

• A scatterplot is a graph of the deviation of the values predicted by the model from the real ones. Such charts are used for continuous values. Visual assessment of the quality of the constructed model is possible only after the model has been built.

• A contingency table is used to evaluate classification results. Such tables are used with various classification methods; here, too, the quality of the constructed model can be assessed only after it has been built.

• A graph of the change in the error magnitude demonstrates how the error changes as the model operates. For example, while a neural network runs, the user can observe the error on the training and test sets and stop training to prevent the network from "overfitting." Here, the quality of the model and its changes can be assessed directly during model building.
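A contingency table of the kind described above can be sketched in a few lines. This is a generic two-class confusion matrix with invented labels, not tied to any particular tool:

```python
from collections import Counter

# Build a contingency (confusion) table after a classifier has produced
# its predictions: rows are actual classes, columns are predicted classes.
def contingency_table(actual, predicted):
    counts = Counter(zip(actual, predicted))
    labels = sorted(set(actual) | set(predicted))
    return {a: {p: counts[(a, p)] for p in labels} for a in labels}

actual    = ["good", "good", "bad", "bad", "good", "bad"]
predicted = ["good", "bad",  "bad", "bad", "good", "good"]
table = contingency_table(actual, predicted)
print(table)
```

The diagonal cells count correct classifications; everything off the diagonal is an error, which is exactly what the analyst inspects when assessing model quality.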

Examples of visualizers that can help interpret a result are the trendline in linear regression, Kohonen maps, and the scatterplot in cluster analysis.
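The regression trendline mentioned above is computed by ordinary least squares. A minimal sketch for the simple one-variable case y = a + b*x, on invented points:

```python
# Fit the trendline y = a + b*x by ordinary least squares:
# b = cov(x, y) / var(x), a = mean(y) - b * mean(x).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.0, 8.1, 9.9]          # roughly y = 2x
a, b = fit_line(xs, ys)
print(f"trendline: y = {a:.2f} + {b:.2f}*x")
```

Drawn over the scatter of points, this line is the visualizer: the closer the points hug it, the stronger the linear relationship.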

Visualization techniques

Visualization methods, depending on the number of dimensions used, are usually classified into two groups:

1. Presentation of data in one, two and three dimensions

This group includes well-known methods of displaying information that are accessible to human perception. Almost any modern Data Mining tool includes visualization methods from this group.

According to the number of dimensions, the representation can be:

• one-dimensional, or 1-D;

• two-dimensional, or 2-D;

• three-dimensional (projection), or 3-D.

It should be noted that the human eye most naturally perceives two-dimensional representations of information.

When using two- and three-dimensional presentation of information, the user has the opportunity to see the patterns of the data set:

• its cluster structure and the distribution of objects into classes (for example, on a scatter diagram);

• topological features;

• the presence of trends;

• information about the mutual arrangement of the data;

• the existence of other dependencies inherent in the studied dataset.

If the dataset has more than three dimensions, then the following options are possible:

ü use of multidimensional methods of information presentation (they are discussed below);

ü reduction of dimension to one-, two- or three-dimensional representation. There are various ways to reduce the dimension. Self-organizing Kohonen maps are used to reduce dimensionality and at the same time visualize information on a two-dimensional map.
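As a hedged sketch of the Kohonen idea (the sizes, rates, and data below are invented, and a real SOM uses a 2-D grid with decaying learning rate and radius), here is a toy self-organizing map that places 3-dimensional inputs on a 1-dimensional chain of units:

```python
import random

# Toy self-organizing (Kohonen) map: 3-D inputs are mapped onto a
# 1-D chain of units, reducing dimensionality while preserving
# neighborhood structure.
random.seed(0)
UNITS = 4                      # nodes in the 1-D map
weights = [[random.random() for _ in range(3)] for _ in range(UNITS)]

def bmu(x):
    """Index of the best-matching unit (closest weight vector)."""
    return min(range(UNITS),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train(data, epochs=50, rate=0.3, radius=1):
    for _ in range(epochs):
        for x in data:
            b = bmu(x)
            for i in range(UNITS):
                if abs(i - b) <= radius:      # neighborhood on the chain
                    for j in range(3):
                        weights[i][j] += rate * (x[j] - weights[i][j])

data = [(0, 0, 0), (0, 0.1, 0), (1, 1, 1), (0.9, 1, 1)]
train(data)
print([bmu(x) for x in data])
```

After training, the two clusters in the data end up on different units of the chain, which is the dimensionality reduction the text describes: each high-dimensional point is summarized by the position of its best-matching unit on the map.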

2. Presentation of data in 4+ dimensions

Representations of information in four or more dimensions are inaccessible to direct human perception. However, special methods have been developed so that such information can still be displayed and perceived by a person.

Annotation: In this lecture we will consider the following topics: associations as the basis of the human brain's work, an overview of theories of information processing, systematization, and visualization, mind mapping, and visual thinking.

As mentioned above, the subject of this course is mind mapping, an effective technique for increasing personal productivity. But before discussing the areas where mind maps apply, the rules for constructing them, and the typical mistakes in their use, and indeed before trying to explain what mind mapping is at all, we need to talk about visual (or radiant) thinking, whose embodiment and result are mind maps.

Associations as the Basis of the Human Brain

Have you ever wondered what principles underlie the work of those super-powerful computers each of us carries inside our skull? I'll bet the first thought to cross most readers' minds was of the microprocessors that power our laptops and workstations. Yet a vague suspicion that the silicon microchip and the brain are in different "weight categories" keeps us from confidently declaring that everything is so simple: binary arithmetic, "pulse or no pulse," and so on. Yes, as a model of how the brain works, a binary machine is quite acceptable, but it is a very crude model (we remember that any model reflects only the property of an object most important in a given context, right?). Reducing our thinking to zeros and ones seems altogether too primitive. How, then, do we explain the cascade of small memories, sensations, colors, smells, and ideas that sweeps before our mind's eye when we think about something? To most strangers, many of these images would have nothing to do with the subject of our reflections; they mean something specific only to us, being tied to personal memories and experiences. Allow yourself to think about something without sticking to any particular line of thought, and you will be surprised how quickly and how far you drift from the original theme: alternating images, connected like links in a chain, pulling one another out of the bins of memory, will rapidly lead you away from the object you started with. Of course, one could try to explain this behavior of our brain by saying that it simply executes an ingeniously complex branched information-processing program over the data already stored in memory, but things are far from that simple.

Any information entering our brain (no matter what it is: a touch, a taste, a smell, a color, a sound) pulls a host of small memories, thoughts, and sensations into the light, just as a stone falling into a pond spreads concentric circles over the surface of the water. And each of these memories drags along many others, which in turn bring to life more and more new images, thoughts, or ideas. Yes, I understand that I have already tired the reader a little with my lengthy reasoning. Its essence was that ones and zeros may be good for explaining how our brain works at the "physical level," but if we are talking about how it thinks, then we should speak not of bits but of associations as the minimal units of information processing by the human brain. Remember the concept of a lexeme as a minimal language unit with independent meaning? In the language our brain "speaks," the lexemes are associations. So what is an association?

Association:

  • in physiology - the formation of a temporary connection between indifferent stimuli as a result of their multiple combination in time;
  • in psychology - a natural connection between individual events, facts, objects or phenomena reflected in consciousness and fixed in memory.

In the presence of an associative connection between mental phenomena A and B, the appearance of phenomenon A in a person's consciousness naturally entails the appearance of phenomenon B.

So, each association is associated with a huge number of new associations, which, in turn, are associated with new and new concepts. Thus, thinking can be represented in the form of a complex associative algorithm, a kind of slalom along the branches of the association tree, diverging from the trunk - the main idea. At one time, Professor Anokhin (http://ru.wikipedia.org/wiki/Anokhin_Pyotr_Kuzmich) said that the brain's ability to form associative connections far exceeds its ability to store information. As for the information capacity of the brain, it is also very impressive - Dr. Mark Rosenzweig (http://en.wikipedia.org/wiki/Mark_Rosenzweig) wrote that even if a person remembered 10 units of information (word, image or another elementary impression) every second for 100 years, it would be possible to fill less than one tenth of the total volume of human memory. And no matter how many such units of information are stored in our head, the number of associations associated with them is still several orders of magnitude higher! The potential of the human brain associated with the creation of associations is truly limitless: all our ideas, memories and sensations are stored in our head in the form of a kind of "tracks" - winding branching paths that connect them with our other thoughts.
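Rosenzweig's estimate, as quoted above, is easy to check with a line of arithmetic: 10 items per second for 100 years comes to about 3×10^10 items, and if that fills less than a tenth of memory, the implied capacity exceeds 3×10^11 items:

```python
# A quick check of the estimate quoted above: 10 items per second,
# memorized continuously for 100 years (leap days ignored).
items_per_second = 10
seconds_in_century = 60 * 60 * 24 * 365 * 100
items = items_per_second * seconds_in_century
print(f"{items:.2e} items memorized")
print(f"implied capacity > {items * 10:.2e} items")
```

And, as the text notes, the number of possible associations between those items is larger still by several orders of magnitude.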

Here's an example of what's usually going on in our heads:

A very familiar picture, isn't it?

Thus, our brains work on two fundamental principles.

  • Associative thinking: the connection of each memory with a host of other images; this is the principle we have been discussing for the last ten minutes.
  • Hierarchy of concepts: in each such associative "track," one of the images is the main (root) one, from which branch-paths diverge to other concepts, ideas, and memories. As a result, we get a tree (or graph) of images associated with the original concept.

If we try to combine these two principles (which work together, complementing each other), we arrive at so-called radiant, or visual, thinking. We will talk about it in this same lecture, but a little later. In the meantime, let us try to figure out what theories of the processing, systematization, and visualization of information currently exist, and whether they bear any resemblance to the principles of the human brain described above.

The concept of theories of information processing, systematization and visualization

Existing theories of information processing

Let's start with the definitions.

Data processing is any transformation of information from one type to another, carried out according to strict formal rules.

Information-processing theory is a direction of scientific knowledge that studies how people handle information, select and assimilate it, and then use it in making decisions and managing their behavior.

Information-processing theories are used in the study of perception, memory, attention, speech, thinking, and problem solving in experimental psychology. In turn, mathematical logic, communication technology, information theory, and the theory of computing systems contributed greatly to the development of these theories. Why do we say "theories" in the plural? Because, in fact, we are dealing with a whole family of quite disparate theoretical and research programs. Naturally, as in any scientific community, there is little agreement among researchers: scientists concur only on some initial premises, theory, and research methodology. Within this family one can distinguish such approaches, widely known in narrow circles, as transformational linguistics (http://ru.wikipedia.org/wiki/Generative_Linguistics), Piaget's psychology (http://www.gumer.info/bibliotek_Buks/Psihol/Jaroschev/11.php), and radical behaviorism. Behaviorism, in particular, studied animal behavior and actively extended its principles to all areas of psychology. However, difficulties arose when attempts were made to extend the theory and methods of behaviorism to human symbolic processes, in particular to linguistic abilities. As researchers' frustration with the usual methods became universal, psychologists turned to other theories, and behaviorism was almost forgotten. Nevertheless, the scientists who develop information-processing theories share with their behaviorist predecessors a belief in empiricism, operationalism, and so on. Psychologists did refuse to extend to people the conclusions obtained from experiments with animals, and to explain the apparent behavior of individuals by external causes such as environmental influences.
At the same time, the general methodology and the statistical methods for processing experimental results remained the same; it was simply that people replaced animals as subjects. The learned fraternity again recognized the existence of innate abilities and began to actively discuss such internal processes as plans, strategies, images, decisions, and associations.

The twentieth century was marked by the rapid development of communication technologies: telephony, radio, and television. The analogy between information processing in the human brain and the operation of the information channel described in communication theory proved very revealing. The research of Claude Shannon (a familiar name, isn't it?) played an important role in creating mathematical information theory and in transferring the concepts of communication theory to the workings of the human brain. The theory he created describes the transmission of messages of any nature from any source to any recipient, including the transmission of signals inside the human brain.

But let us recall the other unfamiliar name mentioned at the beginning of this section: transformational linguistics. Noam Chomsky (http://ru.wikipedia.org/wiki/Chomsky_Noam) argued that human language cannot be scientifically explained from the standpoint of behaviorism. He insisted that this approach completely misrepresents the nature of language, ignoring its structure, rules, and grammar. Instead, he spoke of the "rules in the head" of a person that allow the transmitted information to be transformed: broken down into semantic units (words), which are then linked together. Moving away from behaviorism, the new information-processing paradigm increasingly leaned toward linguistics in its search for ideas. Modern researchers thus try to discover the psychological processes, or mental operations, that underlie language activity. Such types of cognitive activity as perception, memory, thinking, and understanding are being actively studied. And once again, the concept of association did not stand aside.

As for the theory of computing systems, this name, too, hides a whole brood of motley disciplines: the theory of algorithms, numerical methods, the theory of finite automata, programming languages, artificial intelligence, and much more... And this is not the only feature that makes the theory of computing systems similar to the psychology of information processing. Both directions grew out of mathematical logic, both studied the nature of intelligent behavior, and the emergence of computers, along with the principles on which they were built, gave rise to yet another analogy for human mental and intellectual abilities. Machine models have helped in the study of thinking and, in particular, of problem solving. Based on this analogy, psychologists try to explain how the brain receives information, recodes and stores it in memory, and then uses it to make decisions and control behavior. Of course, there is no complete correspondence between the work of the brain and a computer, and there cannot be, but scientists have nevertheless managed to create a coherent concept that can explain how an intelligent system, whether a person or a machine, creates new knowledge. Guess which concept plays the most important role here? Yes, of course you are right: the concept of association!

Systematization and structuring of information

So, we have dealt with information processing; let us move on to systematization. Of course, we have not forgotten that systematization is an integral part, a certain stage, of information processing, but this stage deserves to be discussed separately. As always, let us look at a definition first:

To systematize is to distribute the elements of information on the basis of kinship or similarity, that is, to classify and typify them.

The human brain (in the context of perceiving, memorizing, and transforming information, etc.) works precisely with systematized information. For example, memorization is much more effective if a person manages to structure the incoming information rationally, to "put it on the shelves," as the saying goes. In communication processes (remember our talk about language and linguistics?), a systematized presentation of the transmitted information also plays an important role. Systematization and structuring of information are the most important psychological mechanisms by which the human brain can efficiently process large flows of information.

The desire for a holistic grasp of the object of study, for the systematization of knowledge, is characteristic of any process of cognition. Many researchers have noted that the brain's work on a problem proceeds from understanding the properties, characteristics, and functions of the object of study to the search for missing structural elements and for the connections and relations between them. And if you master a systematic approach and develop your ability to systematize and structure information, you can help your brain work more efficiently both in learning and in solving professional problems.

Data structures come in different kinds: linear (a list), tabular (a table), hierarchical (a tree). Trees (graphs) of concepts built on associative links are the most natural way for our brain to represent (structure) data (although, strictly speaking, associative and classification relations should not be confused). Thinking of visual thinking already? By the way, since we are talking about trees, it is time to move smoothly on to the question of information visualization. But first let us note that there is a whole area of scientific knowledge that studies methods and techniques for structuring information; it is called information architecture. The classics say that
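The three kinds of structure named above can be sketched directly in code. A minimal illustration in Python (the concept names and the little counting helper are our own invention, chosen purely for the example):

```python
# Linear structure: an ordered list of items.
chapters = ["Processing", "Systematization", "Visualization"]

# Tabular structure: rows with named columns (here, a list of dicts).
table = [
    {"concept": "list", "shape": "linear"},
    {"concept": "tree", "shape": "hierarchical"},
]

# Hierarchical structure: a tree of concepts as nested dicts,
# where each key is a concept and its value holds the sub-concepts.
concept_tree = {
    "Information": {
        "Processing": {},
        "Systematization": {"Classification": {}, "Typification": {}},
    }
}

def count_nodes(tree: dict) -> int:
    """Count every concept in the tree, including nested ones."""
    return sum(1 + count_nodes(children) for children in tree.values())

print(count_nodes(concept_tree))  # 5
```

Notice that only the tree lets a concept "contain" other concepts to arbitrary depth, which is why it maps so naturally onto classifications.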

Information architecture is the science that deals with the principles of organizing information and navigating through it, in order to help people find and process the data they need more successfully.

The first thing that comes to mind when we say "visualization" is graphs and charts (there it is, the power of associations!). But only numerical data can be visualized that way; no one has yet managed to plot a graph of connected text. For a text we can draw up a plan and highlight the main ideas (theses), that is, make a short summary. We will talk about the disadvantages and dangers of note-taking a little later; for now let us say that if you combine the plan and the short summary, "hanging" the theses on the branches of a tree whose structure corresponds to the structure (plan) of the text, you get an excellent block diagram of the text that will be remembered much better than any synopsis. In this case the branches play the role of the "tracks" connecting concepts and theses that we spoke of earlier.
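The idea of hanging theses on the branches of a tree that mirrors the plan of a text can itself be sketched in a few lines. A hypothetical Python example (the plan, the theses, and the `render` helper are all invented for illustration; the theses paraphrase this very chapter):

```python
# A text's plan as a tree: headings map to sub-headings or to a thesis string.
plan = {
    "Information processing": {
        "Behaviorism": "Extended animal studies to humans; faltered on language.",
        "Communication theory": "Shannon: messages from any source to any receiver.",
        "Transformational linguistics": "Chomsky: rules in the head transform information.",
    }
}

def render(node: dict, depth: int = 0) -> list[str]:
    """Turn the plan tree into an indented outline, theses hung under headings."""
    lines = []
    for heading, body in node.items():
        lines.append("  " * depth + heading)
        if isinstance(body, dict):
            lines.extend(render(body, depth + 1))
        else:
            lines.append("  " * (depth + 1) + "- " + body)
    return lines

print("\n".join(render(plan)))
```

The indentation is the "branch": each thesis stays attached to the heading it belongs to, so the shape of the outline reproduces the shape of the text.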

Remember how we built UML diagrams from the description of a software system under design, obtained from its future users? The resulting pictures were grasped by both clients and developers much more easily and quickly than a textual description. In the same way you can "depict" absolutely any text, not just the technical specification for a system. The approach described above lets you visually present any text, be it a fairy tale, a technical specification, a lecture, a fantasy novel, or the minutes of a meeting, in the form of a convenient and easy-to-understand tree. You can build it however you like, as long as you end up with a clear and understandable diagram, which it would be nice to illustrate with appropriate drawings.

Such diagrams are also convenient in communication, when discussing issues and problems. As practice shows, the absence of strict notation standards creates no communication difficulties whatsoever for the participants in a discussion. On the contrary, the use of non-verbal forms of presenting information lets you focus on the key points of a problem. Visualization is thus one of the most promising ways to increase the efficiency of analyzing, presenting, perceiving, and understanding information.

Whew, we are finally done with the tedious description of the scientific theories, methods, and techniques used to process, organize, and visualize information! The previous part of the chapter greatly tired both the author and the readers, and yet it was necessary: as a result, we have seen that the features of our brain's work are already actively exploited by scientists in various fields, and that many things that seem familiar to us (personal computers, user interfaces, knowledge bases, and so on) were originally built with the associative nature of human thinking and its tendency toward hierarchical representation and visualization of information in mind. But the pinnacle and natural graphic expression of human thought processes is mind mapping, which we will discuss at last. Along the way we will try to broaden our understanding of the principles of visual thinking.
