Monday, 27 October 2014

SQL Server 2014 new In-memory Tables

I have summarised the basic concepts of In-Memory OLTP from various Microsoft articles, which should help you get started with the basics.

SQL Server 2014 new In-memory Tables:-

Microsoft has come up with a new concept called In-Memory OLTP, introduced in SQL Server 2014, for performance optimisation on OLTP database servers. Traditional SQL Server stores tables on disk, and the SQL OS moves data pages from disk into memory (the SQL Server buffer pool) based on the queries executed. Frequently accessed table pages in memory are often paged back out to disk to accommodate other requested data pages, resulting in frequent paging and high PAGEIOLATCH waits. With the cost of hardware reducing over the years, In-Memory OLTP gives us the option to store frequently used tables in memory instead of on disk. The In-Memory architecture is fully integrated with SQL Server, and memory-optimized tables are fully transactional and durable, and are accessed using Transact-SQL in the same way as disk-based tables.


In-Memory Architecture


Legacy SQL Server had a similar function called DBCC PINTABLE, which pinned the data pages of a table in memory so that they were not flushed to disk. However, this was an internal SQL Server function, meaning the pages were pinned in the SQL Server buffer cache, which often created a memory crunch for other data that needed processing; it caused more problems than it solved and the function was deprecated. Unlike DBCC PINTABLE, the In-Memory OLTP architecture has its own memory space for storing memory-optimized tables and indexes and does not use the buffer cache, which can still be used effectively for disk-based table operations. SQL Server can work with a memory-optimized table and a disk-based table in the same transaction, meaning you can have both memory-optimized tables and disk-based tables in the same database and query across both types of tables.
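
As a rough illustration (the disk-based table dbo.disktable and its column SomeColumn are made up for this example; dbo.durabletable_memory is the memory-optimized table created later in this post), a single query can join both kinds of tables:

-- Hypothetical example: join a disk-based table to a memory-optimized table
SELECT d.Col1, d.SomeColumn, m.Col2
FROM dbo.disktable AS d
INNER JOIN dbo.durabletable_memory AS m ON m.Col1 = d.Col1;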



Transactional Concurrency:-

One of the major advantages of memory-optimized tables is that locking is minimal irrespective of the isolation level of the transaction. Take snapshot isolation as an example: on a disk-based table a SELECT is not blocked during a data modification, thanks to row versioning in the tempdb database, but any process that tries to update or delete the same data simultaneously is still blocked. In the memory-optimized architecture there is no locking when two transactions attempt to update the same row; instead, this is handled by conflict management. Memory-optimized tables use multi-versioning, meaning each row can have multiple versions, instead of the single row version used in snapshot isolation. Also, unlike snapshot isolation, which stores row versions in tempdb, memory-optimized tables store their row versions in memory. This architecture allows even the higher isolation levels to have transactional concurrency. As part of conflict management, SQL Server detects conflicts between concurrent transactions; if a conflict is detected, the transaction is terminated and the client needs to retry.
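
A minimal sketch of that client-side retry pattern, assuming the dbo.durabletable_memory table created later in this post (error numbers 41302/41305/41325 are the write-conflict and validation errors raised by the In-Memory OLTP engine):

-- Sketch: retry a write that may hit a conflict on a memory-optimized table
DECLARE @retry int = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.durabletable_memory WITH (SNAPSHOT)
        SET Col2 = 'updated value'
        WHERE Col1 = 1;
        COMMIT TRAN;
        SET @retry = 0;   -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        SET @retry -= 1;
        IF ERROR_NUMBER() NOT IN (41302, 41305, 41325) OR @retry = 0
            THROW;        -- not a conflict error, or retries exhausted
    END CATCH
END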


Durability:-

Memory-optimized tables can be configured as either fully durable (SCHEMA_AND_DATA) or non-durable (SCHEMA_ONLY). Tables configured as fully durable (SCHEMA_AND_DATA) can be recovered without any data loss on a SQL Server restart, whereas tables configured with SCHEMA_ONLY durability recover only the table metadata on a restart. SCHEMA_ONLY tables are typically staging tables used in an ETL process, which can be flushed on a database restart much like tempdb tables.
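
For instance, a non-durable staging table could be declared like this (a sketch with made-up names, assuming the database already has a memory-optimized filegroup as shown in the Implementation section below):

-- Non-durable (SCHEMA_ONLY) staging table: the definition survives a restart, the rows do not
CREATE TABLE dbo.staging_memory
( Col1 int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
  Col2 char(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)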

As with disk-based tables, all changes to durable memory-optimized tables are logged to the transaction log, and SQL Server writes the transactions to disk. This is where it can be misleading: why is data written to disk when the table lives in memory? Memory-optimized tables are fully integrated with the same log stream used by disk-based tables, so the table can be recovered to a point in time and operations like transaction log backup and restore work without any additional steps. A durable memory-optimized table also maintains a copy of its data on disk in a different data structure, stored as free-form data rows, with the help of two kinds of files called data and delta files. The data and delta files are located in one or more containers (using the same mechanism used for FILESTREAM data). These containers are mapped to a new type of filegroup, called a memory-optimized filegroup. On a SQL Server restart, fully durable memory-optimized tables are loaded back into memory from the data and delta files.
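
If you want to look at the data and delta file pairs behind durable tables, the sys.dm_db_xtp_checkpoint_files DMV exposes them; the column names below are from the SQL Server 2014 version of the view and may differ in later versions:

-- Inspect the checkpoint (data/delta) files of the current database
SELECT file_type_desc, state_desc, file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files;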


Implementation:-

Create a database with a memory-optimized filegroup and container

CREATE DATABASE MEMORYTABLETEST
ON PRIMARY
(NAME = 'MEMORYTABLETEST_PRIMARY', FILENAME = 'C:\Data\MEMORYTABLETEST_PRIMARY.mdf'),
FILEGROUP INMEMORYOLTP CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = 'MEMORYTABLETEST_INMEMORY', FILENAME = 'C:\InMemory\Container')
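
If the database already exists, the same filegroup and container can alternatively be added afterwards; a sketch reusing the names from the script above:

-- Alternative: add a memory-optimized filegroup and container to an existing database
ALTER DATABASE MEMORYTABLETEST ADD FILEGROUP INMEMORYOLTP CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MEMORYTABLETEST ADD FILE
    (NAME = 'MEMORYTABLETEST_INMEMORY', FILENAME = 'C:\InMemory\Container')
    TO FILEGROUP INMEMORYOLTP;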
 
 
Create a Durable memory optimized table
 
CREATE TABLE dbo.durabletable_memory
( Col1 int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
  Col2 char(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
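
Once created, the table is used with ordinary Transact-SQL; a quick sanity check could look like this:

-- Insert a row into the memory-optimized table and read it back
INSERT INTO dbo.durabletable_memory (Col1, Col2) VALUES (1, 'first row');
SELECT Col1, Col2 FROM dbo.durabletable_memory;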
 
   
Memory-Optimized Indexes

Memory-optimized tables support indexes. Memory-optimized indexes exist only in memory, and index operations are not logged in the transaction log.

There are two types of memory-optimized indexes:

Non-clustered hash index
Non-clustered (range) index

The hash index architecture is different from a traditional index: it uses a data structure called buckets to store the index and locate the data rows.
Each memory-optimized table must have at least one index. Note that each PRIMARY KEY constraint implicitly creates an index, so if a table has a primary key it has an index. A primary key is a requirement for a durable memory-optimized table.
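
As a sketch (table and index names are made up), a single memory-optimized table can combine both index types, a hash index on the primary key and a range nonclustered index on another column:

-- Hash index on the primary key plus a range (nonclustered) index on Col2
CREATE TABLE dbo.indexdemo_memory
( Col1 int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
  Col2 datetime NOT NULL,
  INDEX ix_col2 NONCLUSTERED (Col2)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)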


Limitations

In-Memory OLTP improves performance best for OLTP workloads with short-running transactions, but to gain the improvement we need to select the right tables to optimize; otherwise it can cause serious performance issues. Being a new component, In-Memory OLTP has a lot of functional limitations which may be lifted in future versions. For example, cross-database transactions, replication, mirroring and database snapshots are not supported, and there are also various limitations on the operations you can perform on these tables. But if the right table is configured to use this feature, Microsoft suggests we may see a 5-20x performance improvement, which is a huge leap.

Wednesday, 8 October 2014

How to create a SSRS DR/Copy Report Server Database across different servers

Imagine you have installed and initialized SSRS in your Primary and DR environments. To mirror the DR environment to the Primary SSRS, follow the steps below.

1. Copy the ReportServer and ReportServerTempDB databases to the DR server

Note:- If you just restore the databases and try to access SSRS, you get the error “The report server installation is not initialized. (rsReportServerNotActivated)”. You need to follow the steps below to configure the DR SSRS with the existing database.

2.  Backup the Encryption Key for the Primary SSRS
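
On the primary server this can be done from the Reporting Services Configuration Manager (Encryption Keys > Backup) or from an administrator command prompt with RSKeymgmt.exe, along these lines (path and password are placeholders):
RSKeymgmt.exe -e -f "C:\Test\ssrsencryptkey.snk" -p "<password of the encryption key>"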

3.  Copy the Encryption Key File to DR SSRS Server, In my case say C:\Test\ssrsencryptkey.snk

4.  Run the following query against the ReportServer database on the DR server
DELETE FROM [dbo].[Keys] WHERE client > -1

5.  Now search for Program Files folder for “RSKeymgmt.exe” in DR Server

6.  Execute "RSKeymgmt.exe" from an administrator command prompt on the DR server with the below syntax,
RSKeymgmt.exe -a -f "C:\Test\ssrsencryptkey.snk" -p "<password of the encryption key>"

7. Restart SSRS.

Once you have completed these steps, when you access the Reports web page for the DR SSRS it should match the primary server. Remember, I have not configured any real-time DR for the ReportServer and ReportServerTempDB databases in the above example; you have to set up log shipping/mirroring for the SSRS databases for a complete DR solution. The above steps merely explain how to configure SSRS in case of a failover. The same set of steps can be used if you want to copy SSRS across different servers. Hope the article helps.