
Explicit column type. Changing string encoding from Unicode to ASCII

Specifies that the new column is an identity column. When a new row is added to the table, the database engine generates a unique sequential value for this column. Identity columns are typically used with a PRIMARY KEY constraint to keep row identifiers in a table unique. The IDENTITY property can be assigned to tinyint, smallint, int, bigint, decimal(p,0), or numeric(p,0) columns. Only one identity column can be created per table. Default values cannot be used on an identity column. Either both the seed and the increment must be specified, or neither; if neither is specified, the default is (1,1).

Syntax:

[ IDENTITY [ ( seed , increment ) ] ]

seed - the value used for the very first row loaded into the table.

increment - the increment added to the identity value of the previously loaded row.

[Personnel_number] INT IDENTITY(1, 1) NOT NULL PRIMARY KEY
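As a minimal sketch (table and column names here are illustrative), the full definition and behavior of an identity column might look like this:

```sql
-- Hypothetical table: identity values start at 1 and grow by 1
CREATE TABLE Employees (
    Personnel_number INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    Full_name NVARCHAR(100) NOT NULL
);

-- The identity column is omitted from the INSERT: the engine fills it in
INSERT INTO Employees (Full_name) VALUES (N'Ivanov I. I.');
INSERT INTO Employees (Full_name) VALUES (N'Petrov P. P.');
-- The two rows receive Personnel_number values 1 and 2
```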

    Computed columns.

Syntax:

<column_name> AS <expression>

An expression that specifies the value of a calculated column. A computed column is a virtual column that is not physically stored in a table unless it is set to PERSISTED. The value of a column is calculated from an expression that uses other columns in the same table.

For example, a calculated column definition might be:

Order_cost AS Unit_price * Quantity

The expression can be a non-computed column name, a constant, a function, a variable, or any combination of these joined by one or more operators. The expression cannot be a subquery or contain data type aliases.

Calculated columns can be used in select lists, WHERE clauses, ORDER BY clauses, and anywhere else normal expressions can be used, with the following exceptions.

A calculated column cannot be used as a definition of a DEFAULT or FOREIGN KEY constraint, nor in conjunction with a NOT NULL constraint definition. However, a computed column can be used as the key column of an index, or as part of any PRIMARY KEY or UNIQUE constraint, as long as the value of that computed column is determined by a deterministic (predictable) expression and the data type of the result is allowed in the index columns.

For example, if a table contains integer columns a and b, the computed column a+b can be included in an index, but the computed column a+DATEPART(dd, GETDATE()) cannot, because its value may change between invocations.

A computed column cannot be the target column of an INSERT or UPDATE statement.

The DBMS automatically determines whether computed columns are nullable based on the expressions used. Most expressions are considered nullable even if they use only non-nullable columns, because a null value can be produced on overflow or loss of precision. To find out whether a computed column can be NULL, use the COLUMNPROPERTY function with the AllowsNull property. To guarantee that the expression does not allow NULL values, you can wrap it in ISNULL with a constant check_expression, where the constant is a non-null value that replaces any null result. Computed columns whose expressions involve CLR user-defined types require REFERENCES permission on the type.

Specifies that the SQL Server engine will physically store the calculated values in the table and update them whenever any column the computed column depends on changes. Marking a computed column PERSISTED also allows you to create an index on a computed column that is deterministic but imprecise.
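A hedged sketch pulling these rules together (table and column names are illustrative): a deterministic computed column wrapped in ISNULL, stored with PERSISTED, and a COLUMNPROPERTY check of its nullability:

```sql
CREATE TABLE OrderLines (
    Unit_price MONEY NOT NULL,
    Quantity   INT   NOT NULL,
    -- Deterministic expression: physically stored and indexable
    Line_total AS ISNULL(Unit_price * Quantity, 0) PERSISTED
);

-- Should return 0: thanks to ISNULL the computed column cannot be NULL
SELECT COLUMNPROPERTY(OBJECT_ID('OrderLines'), 'Line_total', 'AllowsNull');
```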

At table creation, in addition to the techniques discussed, you can specify the optional CONSTRAINT keyword to give the constraint a name that is unique within the database.

CREATE TABLE Orders(
    ID_Order INT NOT NULL,
    Dish INT NOT NULL,
    Quantity_servings INT NOT NULL CHECK(Quantity_servings > 0),
    [Date] DATE NOT NULL,
    PRIMARY KEY(ID_Order, Dish, [Date]),
    CONSTRAINT FOREIGN_KEY
        FOREIGN KEY (Dish, [Date]) REFERENCES Menu(Dish, [Date]));

Then you can work with the named constraint as with any other database object. If no constraint name is specified, the DBMS will create one automatically, choosing a name according to system-defined rules.
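For example, the named constraint from the Orders table above can later be removed by that name; a minimal sketch:

```sql
-- The constraint name given in CREATE TABLE is a regular database object
ALTER TABLE Orders DROP CONSTRAINT FOREIGN_KEY;
```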

An example of a script generated by the system to create the Dish_type_reference table:

CREATE TABLE [Dish_type_reference](
    [...] INT IDENTITY(1,1) NOT NULL,
    [View] NVARCHAR(20) NOT NULL,
    CONSTRAINT [...] PRIMARY KEY CLUSTERED ([...])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
) ON [PRIMARY]

You have seen some of the configuration conventions that this approach uses. In particular, you saw how to customize the entity model using data annotations expressed through C# attributes and using the Fluent API. In this and subsequent articles, we will take a closer look at these ways to customize the data model.

Customizing column types

Earlier in the Code-First example, you saw the use of some metadata attributes that allowed you to customize the data type of a column in a database table and apply restrictions to it (for example, specifying whether it supports NULL values). We'll look at these attributes in more detail next.

Length limit

The table below shows the column length conventions, their implementation as annotations and in the Fluent API:

The length limit can be imposed on strings or arrays of bytes. By convention, Code-First only uses a maximum length constraint, which means that SQL Server sets the data type for strings and byte arrays to NVARCHAR(n) and VARBINARY(n), where n is the length specified in the constraint. By default, if no length constraint was applied to model properties, Code-First will set the maximum possible column lengths to NVARCHAR(max) and VARBINARY(max).

As shown in the table, using data annotations, you can set a minimum length for a model property using the MinLength attribute - this restriction will not affect the database table in any way. (As described earlier, metadata attributes can be used not only in the Entity Framework, but also, for example, in ASP.NET model validation.) This is why the HasMinLength() method is missing from the Fluent API, because this API is part of the Entity Framework and is responsible for setting up Code-First-only conventions.

It is worth noting that you can specify both the maximum and minimum field lengths in a single StringLength attribute using its named parameters. The following example shows the use of a length limit with annotations (here we use the sample model created earlier in the article "Using Code-First"):

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace CodeFirst
{
    public class Customer
    {
        public int CustomerId { get; set; }

        [StringLength(30)]
        public string FirstName { get; set; }

        public string LastName { get; set; }

        [StringLength(100)]
        public string Email { get; set; }

        public int Age { get; set; }
        public byte[] Photo { get; set; }

        // Link to orders
        public virtual List<Order> Orders { get; set; }
    }

    public class Order
    {
        public int OrderId { get; set; }

        [StringLength(500)]
        public string ProductName { get; set; }

        public string Description { get; set; }
        public int Quantity { get; set; }
        public DateTime PurchaseDate { get; set; }

        // Reference to the buyer
        public Customer Customer { get; set; }
    }
}

The following code performs a similar setup using the Fluent API (recall that to use this API you need to override the OnModelCreating() configuration method in the context class, which in our example is SampleContext):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>().Property(c => c.FirstName).HasMaxLength(30);
    modelBuilder.Entity<Customer>().Property(c => c.Email).HasMaxLength(100);
    modelBuilder.Entity<Order>().Property(o => o.ProductName).HasMaxLength(500);
}

Explicit column type

As described earlier, Entity Framework automatically maps model data types to SQL compatible data types. Code-First allows you to control this process in order to explicitly specify the data type for a column, as shown in the example below:

public class Customer
{
    [Column(TypeName = "smallint")]
    public int CustomerId { get; set; }

    // ...

    [Column(TypeName = "image")]
    public byte[] Photo { get; set; }

    // ...
}

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>().Property(c => c.CustomerId).HasColumnType("smallint");
    modelBuilder.Entity<Customer>().Property(c => c.Photo).HasColumnType("image");
}

NULL support for a column

The Entity Framework convention for null support in a table column states that nullable .NET types (reference types) are mapped to SQL types that allow NULL, while non-nullable .NET types (value types, i.e. structures) are mapped to SQL types declared NOT NULL.

To explicitly specify that a column should not allow null values, use the Required attribute in the data model, or the IsRequired() method of the configuration object in the Fluent API. Conversely, to explicitly indicate that a column must allow NULL values, use the Nullable<T> structure or the C# shorthand that appends a question mark to a value type (for example, int?).

The example below shows the use of these settings to configure nullable or non-nullable type information in a database table:

public class Customer
{
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // This field can be NULL, because we explicitly declared it as int?
    public int? Age { get; set; }

    // Same as the previous property
    public Nullable<int> Age1 { get; set; }

    // ...
}

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>().Property(c => c.FirstName).IsRequired();
    modelBuilder.Entity<Customer>().Property(c => c.LastName).IsRequired();
}

Note that the Fluent API can only be used to enforce NOT NULL for reference types; it cannot make a value type nullable, since NULL support for value types is specified explicitly when declaring the property type in the model class.

Setting Primary Keys

The Entity Framework requires each entity model class to have a unique key (just as each table in a relational database must have a primary key). This key is used by the context object to track changes to model objects. Code-First makes some assumptions when looking for a key. For example, when we generated a database for the Customer and Order entity classes earlier, while examining the Code-First approach, the Entity Framework marked the CustomerId and OrderId fields in the tables as primary keys and made them non-nullable:

Also, EF automatically added auto-increment support to these fields (recall that in T-SQL this is done using the IDENTITY statement). Most often, primary keys in a database are of type INT or GUID, although any primitive type can be used as a primary key. A primary key in a database can be made up of multiple table columns, similarly, an EF entity model key can be made up of multiple model properties. Later you will see how to set up composite keys.
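Looking ahead, a minimal sketch of a composite key in the Fluent API (the OrderLine entity and its properties here are illustrative, not part of the sample model):

```csharp
// Hypothetical entity keyed by two properties at once
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // The anonymous type lists every property making up the composite key
    modelBuilder.Entity<OrderLine>()
                .HasKey(ol => new { ol.OrderId, ol.ProductId });
}
```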

Explicitly setting primary keys

In the case of our two classes Customer and Order, you do not need to worry about explicitly specifying the primary key, because the CustomerId and OrderId properties follow the key naming convention "<class name> + Id". Let's look at an example where you do need to specify a primary key explicitly. Add the following simple class to the model file:

public class Project
{
    public Guid Identifier { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public decimal Cost { get; set; }
}

Our goal is to indicate that the Identifier property in this class is the table's primary key. As you can see, the name of this property does not follow the Entity Framework naming convention for primary keys.

Add a description of the new table to the SampleContext context class:

public class SampleContext : DbContext
{
    // ...
    public DbSet<Project> Projects { get; set; }
    // ...
}

Now, if you run the application and insert a new customer's data, the DbModelBuilder class will throw an exception because it cannot build the entity data model correctly. (Recall that our application is a simple ASP.NET site that can insert a new customer.)

This exception is thrown because the Entity Framework did not find a property named Id or ProjectId in the Project class, so Code-First cannot figure out which field to use as the table's primary key. The issue can be fixed with data annotations or the Fluent API, as shown in the example below:

public class Project
{
    [Key]
    public Guid Identifier { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public decimal Cost { get; set; }
}

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Project>().HasKey(p => p.Identifier);
}

Note that when using the Fluent API, the HasKey() method is called directly on Entity<T>(), not on Entity<T>().Property() as in the examples above, because the primary key is set at the table level, not at the property level.

Configuring Auto Increment for Primary Keys

As the conventions dictate, the Entity Framework adds auto-increment for key properties of type int. In the Project table created earlier, the primary key type is Guid, so EF does not attach a counter to this field when creating the table in the database. This is shown in the figure:

Let's add a new web form to our project and call it DatabaseGenerated.aspx. In the Page_Load handler, add the following code, which inserts new data into the Project table. In this case, the data will be added every time we open the web form page in a browser.

using System;
using System.Data.Entity;
using CodeFirst;

namespace ProfessorWeb.EntityFramework
{
    public partial class DatabaseGenerated : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // This setting makes the database drop and re-create itself
            // automatically when the model structure changes
            // (convenient for testing the examples)
            Database.SetInitializer(new DropCreateDatabaseIfModelChanges<SampleContext>());

            SampleContext context = new SampleContext();

            Project project = new Project
            {
                StartDate = DateTime.Now,
                EndDate = DateTime.Now.AddMonths(1),
                Cost = 8000M
            };

            context.Projects.Add(project);
            context.SaveChanges();
        }
    }
}

Run the project and open the DatabaseGenerated.aspx web form. As a result, a new entry will be added to the Project table:

Neither the database nor the Entity Framework knows that we would like a new Guid for each record, so a Guid of all zeros is created automatically. If you refresh the page in your browser (which effectively tries to insert a new record into the table), the Entity Framework will throw a SqlException, because we are trying to insert a record with a key that already exists in the table: the primary key constraint fires, since the key must be unique for each new record.

As a result, to solve this problem we would need to generate a unique Guid in code. Tables that use auto-increment for their primary keys do not have this extra work: for each inserted record, the counter produces a new key value by taking the key of the last record and adding the increment to it (1, if the IDENTITY(1,1) construct was used).
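Before turning to the attribute described below, note that the problem could also be sidestepped by generating the key on the client; a hedged sketch:

```csharp
// Sketch: assign a fresh GUID ourselves so every insert gets a unique key
Project project = new Project
{
    Identifier = Guid.NewGuid(),
    StartDate = DateTime.Now,
    EndDate = DateTime.Now.AddMonths(1),
    Cost = 8000M
};
context.Projects.Add(project);
context.SaveChanges();
```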

To solve this problem for keys of a type other than int, use the DatabaseGenerated metadata attribute, whose constructor takes a DatabaseGeneratedOption enumeration value with three possible values:

None

The database does not generate any value for the primary key. In effect, this option lets you disable the auto-increment that is automatically added to primary keys of type int.

Identity

When values are inserted into the table, the database generates a unique value for the primary key.

Computed

Similar to Identity, except that the value is generated not only when records are inserted into the table but also when they are updated.
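For example, None can be used to switch off the auto-increment that the convention adds to int keys; a hedged sketch:

```csharp
public class Customer
{
    // Sketch: suppress the IDENTITY that Code-First would otherwise
    // generate for an int primary key; values must now be supplied in code
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int CustomerId { get; set; }

    // ...
}
```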

Modify the model class to tell the database to create a unique primary key:

public class Project
{
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid Identifier { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public decimal Cost { get; set; }
}

Run the sample project and refresh the page several times to insert a few records into the table and verify that the exception is no longer thrown. The figure below shows the data added to the table:

Pay attention to automatically generated IDs for projects. The same effect can be achieved using the HasDatabaseGeneratedOption() method in the Fluent API:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Project>()
                .Property(p => p.Identifier)
                .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
}

Working with Complex Data Types

The Entity Framework has supported the ability to use complex types since the first version. In fact, a complex type in .NET is a class that can be referenced in a model class. A complex type has no key and can be used in multiple model objects. Let's look at an example of the following model:

public class User
{
    public int UserId { get; set; }
    public int SocialNumber { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
}

In this class, the user's residence address can be extracted into a separate class that the model then references:

public class User
{
    public int UserId { get; set; }
    public int SocialNumber { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Address Address { get; set; }
}

public class Address
{
    public int AddressId { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
}

By convention, the Entity Framework will interpret this model as two separate tables. But our goal is to turn the Address class into a complex type. The traditional way to do that is to remove the AddressId key:

public class Address
{
    // public int AddressId { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
}

In addition to the rule that a complex type must not have a key, Code-First imposes two further rules for detecting a complex type. First, a complex type may contain only simple properties. Second, a class that uses the type may not expose it through a collection property. In other words, if you want to use the complex type Address in the User class, the corresponding property must not be declared as List<Address> or any other collection.

As shown in the image below, after running the application, Code-First recognizes the complex type and creates special fields in the User table (don't forget to add the User declaration in the context class):

Notice how the fields describing the user's address are named: ComplexTypeName_PropertyName. This is the Entity Framework naming convention for complex types.

Customizing Complex Types Bypassing Code-First Conventions

What if the class describing a complex type does not follow the Entity Framework conventions - for example, you want to keep the AddressId field in the Address class? If we add this field back and run the project, then instead of one User table with the Address complex type, the Entity Framework will create two tables related by a foreign key. To fix this, you can explicitly apply the ComplexType attribute to the model class or use the ComplexType() method of the DbModelBuilder class in the Fluent API:

[ComplexType]
public class Address
{
    public int AddressId { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
}

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.ComplexType<Address>();
}

It was said above that a class describing a complex type should contain only simple properties (that is, properties that do not refer to other objects). This convention can be overridden with the same tools. The following example adds a new UserInfo complex type that references another type, FullName:

public class User
{
    public int UserId { get; set; }
    public UserInfo UserInfo { get; set; }
    public Address Address { get; set; }
}

[ComplexType]
public class UserInfo
{
    public int SocialNumber { get; set; }

    // Not a simple property
    public FullName FullName { get; set; }
}

[ComplexType]
public class FullName
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Address
{
    public int AddressId { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
}

By telling Code-First that UserInfo is a complex type with the ComplexType attribute, we have overridden the limitation imposed on complex types by the default convention.

It's worth noting that Code-First allows you to customize complex types just like regular tables, using the Fluent API or annotations. The following example configures the Address complex type:

public class Address
{
    public int AddressId { get; set; }

    [MaxLength(100)]
    public string StreetAddress { get; set; }

    public string City { get; set; }
    public string ZipCode { get; set; }
}

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.ComplexType<Address>()
                .Property(a => a.StreetAddress)
                .HasMaxLength(100);
}

The figure below shows the structure of the User table. You can see how EF names the properties of nested complex types, and how it applies the length constraint to the StreetAddress field:

Description of other settings

In this section, we will briefly review all the remaining table column settings, which are used quite rarely due to their specific features.

Timestamp columns

The TIMESTAMP data type in T-SQL defines a column stored as VARBINARY(8) or BINARY(8), depending on the column's nullability. For each database, the system maintains a counter that is incremented whenever any row containing a TIMESTAMP cell is inserted or updated, and the current counter value is assigned to that cell. Cells of type TIMESTAMP therefore let you determine the relative time of the last change to the corresponding rows. (ROWVERSION is a synonym for TIMESTAMP.)

By itself, the value stored in a TIMESTAMP column has no meaning. The column is typically used to determine whether a particular table row has changed since it was last read. This helps manage concurrent access to a table: a write can be rejected if another thread has changed the row in the meantime.

In Code-First, to indicate that a column should be of type TIMESTAMP, use the Timestamp attribute in annotations or the IsRowVersion() method in the Fluent API, as shown in the example below:

[Timestamp]
public byte[] RowVersion { get; set; }

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Project>().Property(p => p.RowVersion).IsRowVersion();
}

A less common way to stay safe with parallel threads is to enable a concurrency check on individual columns. This method can be used even in DBMSs that do not support the Timestamp/Rowversion types. Under this scheme, the original values of the marked columns are compared at write time, so an update fails if another thread has changed them in the meantime. To mark the columns that participate in the concurrency check, use the ConcurrencyCheck attribute in annotations or the IsConcurrencyToken() method in the Fluent API.
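A hedged sketch of both variants, using the Cost property of the Project class from the earlier examples:

```csharp
public class Project
{
    public Guid Identifier { get; set; }

    // The original value of Cost is now compared in the WHERE clause
    // of every UPDATE, so a competing change makes the update fail
    [ConcurrencyCheck]
    public decimal Cost { get; set; }
}

// same with Fluent API
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Project>()
                .Property(p => p.Cost)
                .IsConcurrencyToken();
}
```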

Changing string encoding from Unicode to ASCII

By default, the Entity Framework maps all string model types, such as string or char, to SQL string types that use the two-byte Unicode character set: NVARCHAR or NCHAR. You can change this behavior and explicitly tell EF to use the single-byte ASCII encoding, in which case the VARCHAR and CHAR types are used instead. To do this, call the IsUnicode() method with a false argument in the Fluent API. Annotations do not provide a way to customize string encodings.
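A minimal sketch for the FirstName property of the Customer class:

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Sketch: map FirstName to VARCHAR(30) instead of the default NVARCHAR
    modelBuilder.Entity<Customer>()
                .Property(c => c.FirstName)
                .HasMaxLength(30)
                .IsUnicode(false);
}
```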

Specifying Precision for the Decimal Type

In the Fluent API you can use the HasPrecision() method, passing it two parameters: the precision and the scale. Data annotations in Code-First offer no alternative to this method. By default, the Entity Framework sets precision 18 and scale 2 for decimal types.

The example below applies this method to the Cost property of the Project table we created earlier when discussing primary keys:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Project>().Property(p => p.Cost).HasPrecision(6, 3);
}

What is IDENTITY

IDENTITY is not a data type. It is an additional property, a constraint imposed on integer data types in MS SQL Server. That is, the property can be applied to fields of the following types: tinyint, smallint, int, bigint, decimal(p,0), or numeric(p,0).

The FoxPro counterpart of this property is the Integer-AutoIncrement data type. Just don't assume that Integer-AutoIncrement is the same as a field with the IDENTITY property. It is only an analogue: the two are broadly similar but differ in a number of ways. This article focuses on the IDENTITY property in MS SQL Server.

Fields with the IDENTITY property have the following features:

  • Only one field with the IDENTITY property is allowed in one table
  • A field with the IDENTITY property cannot be edited; it is read-only.
  • The value of the field with the IDENTITY property is assigned automatically when a new record is created.

There are some more features, but they are already a consequence of the listed features.

The new value is the last used value plus some fixed increment. Note that the new value is based not on the maximum value among existing records but on the last used value. This means that the record holding the last used value may no longer physically exist, yet that value is still used as the base.

In other words, "holes" are perfectly acceptable in the sequence of values of a field with the IDENTITY property; the list of values need not be contiguous at all.

As a rule, 1 is specified as the increment, but it can be any integer, including a negative one.

Due to these features of fields with the IDENTITY property, such fields are often used as primary keys. In other words, as fields, by the value of which it is always possible to uniquely identify a table entry.

Note that the IDENTITY property does not enforce uniqueness of the data. For example, suppose a field was originally of type INTEGER and a number of values were entered into it, and then the table structure was changed and the IDENTITY property was imposed on that field. New records may well receive values that were already entered into the table earlier. Therefore, if a field with the IDENTITY property is used as a primary key, an additional uniqueness constraint should be imposed on it.
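A hedged sketch of such an extra constraint (table and column names are illustrative):

```sql
-- IDENTITY alone does not guarantee uniqueness, so enforce it explicitly
ALTER TABLE Employees
    ADD CONSTRAINT UQ_Employees_Personnel_number UNIQUE (Personnel_number);
```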

Disadvantage of using fields with the IDENTITY property as the primary key

However, despite the obvious advantages of using fields with the IDENTITY property as a primary key, they also have a serious drawback.

The value of fields with the IDENTITY property cannot be known until the record is physically created.

So what? What are the problems? Let's create a record and find out its new value.

The problem is that in order to read the value of a field of some record, that record must first be found, and records are found by the value of the primary key - the very value we need to determine. A vicious circle: to read the value, the value must already be known!

The data storage structure in MS SQL Server is fundamentally different from that of DBF files. It has no such concepts as "physical record number", "next record", or "last record". That is, it is impossible to navigate to the "last record" and read the value of its primary key.

Moreover, although the new value of a field with the IDENTITY property is always greater than any existing value (when the increment step is positive), it cannot be determined simply by computing the maximum of the existing values. The maximum value itself will, of course, be obtained; there is just no guarantee that it belongs to the record that was created.

The point is that MS SQL Server is typically used in multi-user applications, which means several users can create new records at the same time. One user creates a new record and starts computing the maximum value, and at that moment another user also creates a new record. As a result, the first user receives as the maximum the value of the record created by the second user.

So should fields with the IDENTITY property not be used as primary keys? Not at all. There are ways to determine the value of a field with the IDENTITY property in a newly created record.

How to determine the value of a field with the IDENTITY property in a new record

Actually, there are three fundamental strategies for determining the value of a field with the IDENTITY property in a new record that has just been created.

Now let's take a closer look at the advantages and disadvantages of each strategy.

The value returned by the @@IDENTITY system variable

MS SQL Server has a number of system variables whose values change automatically when certain events occur. In particular, the @@IDENTITY system variable is automatically set to the value of the IDENTITY field of the last record created in the current connection. That is, records created in another connection (by another user) do not affect its value in this connection.

Well, there is the solution, it seems: right after creating a new record, read the @@IDENTITY system variable and we have the desired value.

In general, that is right. The only problem is that @@IDENTITY changes its value when a record is created in any table.

In practice, this means that if an insert trigger is set on a table, in the body of which an INSERT command is given to create a record in another table, which, in turn, also has a field with the IDENTITY property, then the @@IDENTITY system variable will receive the value of the field from this second table.

In other words, you can rely on the value of the @@IDENTITY system variable, but remember that this variable is not tied to the value of a field in one table.

Value returned by SCOPE_IDENTITY()

The SCOPE_IDENTITY() system function was introduced in MS SQL Server 2000. It also returns the value of the IDENTITY field of the last record created, but only of one created within the current scope.

The term SCOPE is hard to pin down precisely, but roughly speaking, a scope is a single procedure, function, or batch. In other words, SCOPE_IDENTITY() returns the value of the IDENTITY field of the last record created within the procedure where the function is called.

A trigger is a different scope (a different function), so it does not affect the value returned by SCOPE_IDENTITY() in any way.

Even if two users call the same procedure simultaneously, each call runs in its own scope, so again there is no conflict.

A disadvantage of this function is that it must be called within the SCOPE where the new record of the table of interest was created. And this is not always possible.

In other words, to use SCOPE_IDENTITY() correctly, you must always keep track of the SCOPE boundaries, often by creating special stored procedures.
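The scope-safe behavior can be illustrated outside MS SQL Server as well. SQLite's last_insert_rowid() happens to behave like SCOPE_IDENTITY() with respect to triggers: per the SQLite documentation, a key generated by a trigger's insert is visible only while the trigger runs and reverts once it finishes. A Python sketch with invented table and trigger names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE main_tab  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE audit_tab (id INTEGER PRIMARY KEY, note TEXT);
    -- Push audit_tab's id counter ahead so the two tables' keys differ.
    INSERT INTO audit_tab (note) VALUES ('seed'), ('seed');

    -- Every insert into main_tab also inserts a row into audit_tab.
    CREATE TRIGGER trg_main AFTER INSERT ON main_tab
    BEGIN
        INSERT INTO audit_tab (note) VALUES ('added ' || NEW.name);
    END;
""")

cur = con.execute("INSERT INTO main_tab (name) VALUES ('first')")
# The trigger inserted audit_tab row 3, but once the trigger finished,
# last_insert_rowid() reverted to the key of the main_tab row.
print(cur.lastrowid)                                            # 1
print(con.execute("SELECT last_insert_rowid()").fetchone()[0])  # 1
```

Under @@IDENTITY semantics the caller would have seen 3 here; under scope semantics it sees 1, which is exactly why SCOPE_IDENTITY() is the safer choice when triggers are involved.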

Finding a new record by the value of other fields

If you remember, the main problem with determining the value of an IDENTITY field is that this field is used as the primary key. That is, it is precisely by its value that the needed record is found.

However, tables often have a field or set of fields that can also uniquely identify a record. For example, if we are talking about a lookup table, it certainly has a "Name" field, and it is equally obvious that this field must be unique within the table. Otherwise the lookup table loses its meaning: why enter records with the same value?

Why not use this "Name" as the primary key? Why have an IDENTITY field at all? That is a topic for a separate discussion. In short, "Name" is for the user (external data), while IDENTITY is for the database's referential integrity (internal data).

The value of the IDENTITY field in the new record is not known. But the value of the "Name" field in this new record is known perfectly well: the user himself entered it! This means that after creating a new record, you can find it by the value of the "Name" field and read the value of the IDENTITY field.

The only problem is that a field or set of fields that uniquely identifies a record does not always exist. This, by the way, is one of the reasons for introducing so-called surrogate keys, those same IDENTITY fields.

If you nevertheless decide to use this strategy for finding a new record, be sure to place a unique constraint on the "Name" field (or the set of fields you choose), so that two records can never accidentally hold the same value in this field.
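The same strategy can be sketched in any SQL environment. Below is a Python/SQLite illustration with invented table and column names: a surrogate integer key plus a UNIQUE "name" column, where the lookup after the insert is guaranteed by the constraint to find exactly the row just created.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Surrogate key (id) plus a UNIQUE "name" field that the user fills in.
con.execute("""
    CREATE TABLE directory (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    )
""")

def add_entry(con, name):
    """Insert a row, then find it again by its unique name and return its id."""
    con.execute("INSERT INTO directory (name) VALUES (?)", (name,))
    row = con.execute("SELECT id FROM directory WHERE name = ?", (name,)).fetchone()
    return row[0]

print(add_entry(con, "Alpha"))   # 1
print(add_entry(con, "Beta"))    # 2

# The UNIQUE constraint rejects an accidental duplicate.
try:
    add_entry(con, "Alpha")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

Without the UNIQUE constraint the SELECT could match several rows and silently return the wrong key, which is exactly the failure mode the paragraph above warns about.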

How to work with fields with the IDENTITY property in FoxPro

We are done with the theoretical part; now "let's try to take off with all this stuff". That is, let's decide how to use all this knowledge in FoxPro. Once again, here is the problem to be solved.

A record is added to an MS SQL Server table that has an IDENTITY field. Immediately after the new record is created, you need to get the value of the IDENTITY field on the FoxPro side.

FoxPro has three fundamental ways of organizing work with MS SQL Server:

  • Direct use of Pass-Through technology via the SQLExec() function
  • Using Remote View
  • Using the Cursor Adapter

Here we should dwell on which event actually creates a record on MS SQL Server. With Pass-Through everything is clear: it is literally a direct command to the server to create a new record. But with Remote View and Cursor Adapter things are a little different.

The result of both a Remote View and a Cursor Adapter is a cursor, i.e. a temporary table physically located on the client machine. By default this cursor opens in optimistic row buffering mode (3) and can only be switched to optimistic table buffering mode (5). It is not possible to switch this cursor to pessimistic buffering or to disable buffering altogether.

Consequently, a new record is first physically created on the client machine, in this very cursor, or more precisely in its buffer. The record is physically created on MS SQL Server only after the buffer is flushed.

For row buffering, the buffer can be flushed automatically when one of the following happens:

  • Moving (or attempting to move) to another record
  • Closing the cursor
  • Switching to table buffering mode
  • Issuing the TableUpdate() function

For table buffering, the buffer is flushed only by the TableUpdate() function, and nothing else.

No other actions or operations on either a Remote View or a Cursor Adapter will create a new record on MS SQL Server. If it turns out that some operation did create a record on the server, it means the cursor was in row buffering mode and one of the events that triggers an automatic buffer flush occurred.

For example, this can happen with the Requery() function on a Remote View. But this does not mean that Requery() flushes the buffer. Not at all. It is just that one of the preconditions of Requery() is closing the previously existing cursor, and it is this event that triggers an automatic buffer flush if the cursor is in row buffering mode.

To avoid such misunderstandings, switch the cursor to table buffering mode (5). Then you always control when the buffer is flushed.

However, understand that even if you set table buffering mode, change several records in the Remote View or Cursor Adapter, and then issue TableUpdate(), the buffer is still flushed one record at a time. That is, the server receives not one command, but a separate modification command for each record.

As far as creating new records is concerned, this means that in all events of the Cursor Adapter object only one record is ever inserted at a time.

Direct use of Pass-Through technology via the SQLEXEC() function

With this way of working, the programmer effectively works with the MS SQL server directly: he forms all the commands sent to the server, receives the results, and processes them himself. In this case it is not difficult to send the server an additional query for the value of the SCOPE_IDENTITY() function:

LOCAL lcNewValue, lnResult
lcNewValue = "New value"
lnResult = SQLExec(m.lnConnectHandle, "INSERT INTO MyTab (Field1) VALUES (?m.lcNewValue)")
IF m.lnResult > 0
    SQLExec(m.lnConnectHandle, "SELECT NewIdent=SCOPE_IDENTITY()", "NewIdent")
    ?NewIdent.NewIdent
ELSE
    LOCAL laError(1)
    =AERROR(laError)
    * Analyze the laError array to determine the cause of the error
ENDIF

In this example, m.lnConnectHandle is the number of an MS SQL Server connection established earlier, and MyTab is a table that has an IDENTITY field.

After the second query executes, the resulting NewIdent cursor holds the desired value in the NewIdent field of its first record. With this syntax, both the insert command and the SCOPE_IDENTITY() call occur in the same SCOPE, which is why we get the desired value.

Using Remote View

Remote View is a kind of "add-on" over the Pass-Through technology. Essentially, when a new record is created, the same INSERT INTO command is executed. The problem is that even if we read the connection handle the Remote View works on and then execute a query for the value returned by SCOPE_IDENTITY(), we will get NULL, because in this case the insert command and SCOPE_IDENTITY() are executed in different SCOPEs. Therefore, only two ways remain to determine the value of the IDENTITY field.

* Determining the value via the @@IDENTITY system variable
LOCAL lnConnectHandle
lnConnectHandle = CursorGetProp("ConnectHandle", "MyRemoteView")
SQLExec(m.lnConnectHandle, "SELECT NewIdent=@@IDENTITY", "NewIdent")
?NewIdent.NewIdent

* Determining the value by the value of another field
LOCAL lnConnectHandle, lcNickName
lnConnectHandle = CursorGetProp("ConnectHandle", "MyRemoteView")
lcNickName = MyRemoteView.NickName
SQLExec(m.lnConnectHandle, "SELECT TabId FROM MyTab WHERE NickName=?lcNickName", "NewIdent")
?NewIdent.TabId

In both examples, MyRemoteView is the name of your Remote View, NickName is the field of the Remote View whose value is used to find the new record, and TabID is the IDENTITY field whose value must be determined.

It is assumed that the new record has already been created: either TableUpdate() was issued explicitly, or the Remote View was in row buffering mode and an attempt was made to move to another record.

And why, in the second case, didn't we use the seemingly more obvious approach of searching after Requery()? Something like:

REQUERY("MyRemoteView")
SELECT MyRemoteView
LOCATE FOR NickName = "New Value"
?MyRemoteView.TabID

There are several reasons for this.

First, Requery() re-executes the query. That is, it assumes the buffer of all Remote View records has been flushed. But that may not be the case, for example when the buffer is flushed one record at a time in a loop.

Secondly, a Remote View usually contains additional record-selection conditions, and the newly created record may not satisfy those conditions at all. In that case, after Requery() the record, although physically created on the server, will not appear in the Remote View itself.

Using the Cursor Adapter

The Cursor Adapter object was introduced in Visual FoxPro 8. It is designed to simplify work with remote data for a number of standard operations, in particular creating new records. Through the Cursor Adapter you can implement all three ways of getting the value of the IDENTITY field of a new record.

The value returned by the @@IDENTITY system variable

After the buffer is flushed and the new record is physically created on MS SQL Server, the Cursor Adapter's AfterInsert event fires. This is where you execute an additional query to the server and read the value of the @@IDENTITY system variable:

PROCEDURE AfterInsert
LPARAMETERS cFldState, lForce, cInsertCmd, lResult
IF lResult = .T.   && The insert succeeded; we need to get the ID value
    LOCAL currentArea
    currentArea = SELECT()
    TRY
        IF 1 = SQLEXEC(This.DataSource, "SELECT NewIdent=@@IDENTITY", "NewIdent")
            REPLACE TabId WITH NewIdent.NewIdent IN (This.Alias)
            USE IN NewIdent
        ELSE
            LOCAL laError(1)
            =AERROR(laError)
            * Analyze the laError array to determine the cause of the error
        ENDIF
    FINALLY
        SELECT (currentArea)
    ENDTRY
ENDIF
ENDPROC

I think the code is clear enough to need no explanation. After a new record is successfully created on the server, SQLExec() on the same connection determines the @@IDENTITY value, and that value is written to the key field of the current cursor record.

In Visual FoxPro 9, a new property was added to the Cursor Adapter object: InsertCmdRefreshCmd. This is a command sent to the server only after a successful insert of a new record, so there is no need to verify the insert yourself.

Paired with another new property, InsertCmdRefreshFieldList, you can refresh the key based on the @@IDENTITY system variable in an even simpler way. Just make the following CursorAdapter settings:

CursorAdapter.InsertCmdRefreshCmd = "SELECT @@IDENTITY"
CursorAdapter.InsertCmdRefreshFieldList = "TabId"

Here TabId is not the name of the IDENTITY field in the table on the MS SQL server, but the name of the field of the client-side cursor in which the contents of the IDENTITY field are displayed. As a rule these names are the same, but in general they may differ. More about the InsertCmdRefreshFieldList property below, in the section on finding a new record by the value of other fields.

Value returned by SCOPE_IDENTITY()

Here things are somewhat more complicated. You cannot proceed exactly as in the case of the @@IDENTITY system variable: SQLExec() and the Cursor Adapter work in different SCOPEs, so with that approach SCOPE_IDENTITY() would always return NULL. To overcome this contradiction, special stored procedures are used.

First, create the following stored procedure on the MS SQL server itself:

CREATE PROCEDURE Get_ValueInt
    @ValueIn Int,
    @ValueOut Int OUTPUT
AS
SET NOCOUNT ON
SELECT @ValueOut = @ValueIn

As you can see, the only purpose of this procedure is to assign the value of the input parameter to the output parameter. You can now use it to determine the SCOPE_IDENTITY() value in the Cursor Adapter. To do this, the command is modified in the BeforeInsert event:

PROCEDURE BeforeInsert
LPARAMETERS cFldState, lForce, cInsertCmd
cInsertCmd = cInsertCmd + ;
    "; DECLARE @id int" + ;
    "; SELECT @id=SCOPE_IDENTITY()" + ;
    "; EXEC Get_ValueInt @id, ?@MyTab.TabId"
ENDPROC

Just keep in mind that in this substitution MyTab is no longer the name of the table on the MS SQL server, but the name of the cursor created on the client; more precisely, the name written in the Alias property of the Cursor Adapter itself. Accordingly, TabId is the name of the field of the client-side cursor that holds the value of the IDENTITY field.

In effect, a dynamic stored procedure is formed here: not just a single INSERT command, but a sequence of commands. The commands are separated by semicolons, although this is not required at all; separating them with a simple space is enough.

Finding a new record by the value of other fields

If you have Visual FoxPro 8, use a method similar to the one used for the @@IDENTITY system variable: in the AfterInsert event of the Cursor Adapter object, execute an additional query via SQLExec().

Starting with Visual FoxPro 9, additional properties have been added to the Cursor Adapter object to automate this process without writing additional code.

InsertCmdRefreshFieldList - the list of fields whose values will be refreshed after the insert
InsertCmdRefreshKeyFieldList - a field, or set of fields other than the key, that also uniquely identifies the record and is used to find the new record

Note that these properties specify not the field names of the MS SQL server table itself, but the corresponding field names of the cursor received on the client side. For the same example it would look something like this:

InsertCmdRefreshFieldList = "TabID"
InsertCmdRefreshKeyFieldList = "NickName"

In other words, the record will be found in the table on the MS SQL server by the value of the NickName field, the value of its TabID field will be read, and that value will be written into the new record on the client side.


For a more detailed discussion of this topic, as well as examples of use, you can see here.



Vladimir Maksimov
Last update: 01.05.06
