NOLOCK

I recently discovered and tried the “NOLOCK” query hint and found that in some cases it reduced my query times by up to 50%, though at other times it had minimal or no effect. I got the same number of results with each query as I had gotten in the past.

To use “NOLOCK” in a query, you put it in parentheses after each table name you are querying. (The fully supported form is WITH (NOLOCK); specifying the hint without the WITH keyword is deprecated syntax, though it still works.)

Examples:

INNER JOIN Table1 (nolock)

FROM Table1 (nolock)

Basically it is a dirty read of the data – rather than waiting on locks held by other transactions, it reads right past them and returns the data as it currently sits in the database, including uncommitted changes. This is essentially the per-table version of the “SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED” statement, except it seems to actually speed up my queries, whereas I don’t see increased speed from the “set transaction isolation level read uncommitted” statement.
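For comparison, the session-level form mentioned above can be sketched like this (it applies dirty reads to every table touched in the session, instead of hinting tables one at a time; the Meds table is borrowed from the sample query later in this post):

```sql
-- Session-wide dirty reads: every table behaves as if hinted with (NOLOCK).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT Med_ID, Strength
FROM Meds;

-- Restore the default when finished.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```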

This is new to me and I’m still experimenting with it, but if records are being locked by other queries, putting (nolock) beside each joined table may let your query bypass those locks and return the data more quickly.

NOTE: I work in a database that is supposed to have static data outside of a nightly update. The NOLOCK query hint may not be a good choice if you are running reports inside a live system. The database I have access to is one in which I do not have the right to update, delete, or create tables, but still it appears there are locking issues based on how much the NOLOCK query hint sped up my queries.

Below is a sample query:

SELECT DISTINCT
        Meds.Med_ID AS Med_ID,
        Meds.[Desc] AS Med,
        Meds.Generic AS Generic,
        Generic_Name.[Desc] AS Generic_Name,
        Therapeutic.[Desc] AS Thera,
        Pharma.[Desc] AS Pharma,
        Meds.Strength AS Med_Strength,
        Meds.Form AS Med_Form,
        Meds.[Route] AS Med_Route
INTO ##BPatchList
FROM Meds (nolock)
        LEFT JOIN Pharma (nolock)
        ON Meds.PHARM_ID = Pharma.PHARM_ID
        LEFT JOIN Generic_Name (nolock)
        ON Meds.GENERIC_ID = Generic_Name.GENERIC_ID
        LEFT JOIN Therapeutic (nolock)
        ON Meds.Thera_ID = Therapeutic.Thera_ID
WHERE Meds.Med_ID IN (/*Buprenorphine patch*/ 1234, 3456, 2345, 5678, 4682, 1236, 5656, 6666, 1654, 5464)


SQL Server: Finding Dependent Objects

To find objects dependent on a given table, you can use the following:

EXECUTE sp_depends 'tablename';

In the database I work with, the table I used brings up many views and a few stored procedures.
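Worth noting: sp_depends is deprecated, and Microsoft recommends the dependency catalog views instead. A sketch of the newer approach, assuming a table named dbo.Meds (substitute your own two-part name):

```sql
-- Lists views, procedures, and functions that reference dbo.Meds.
-- sys.dm_sql_referencing_entities replaces the deprecated sp_depends.
SELECT referencing_schema_name,
       referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.Meds', 'OBJECT');
```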

I ran into this on kudvenkat’s YouTube video, “Dynamic SQL vs Stored Procedure.”

He was using it to see what objects were dependent on the table before deleting it.

I work with a database whose tables I never delete, but it is still useful at times to know which views depend on a table, if only to learn that a given view exists in the first place. Thankfully, with Redgate’s SQL Search app (a free tool, which I use daily), I can see how a view is constructed, including which tables it uses.

Getting to your ##GlobalTempTables from the Remote Server

If my understanding is correct, everybody who has access to a remote server has the ability to make temp tables on that remote server. However, this does not necessarily endow a person with the rights necessary to export that data.

This has been the case where I work for a while. Most of the time, the queries I run are small enough in results to allow me to simply copy and paste the data where needed. For larger files, I have struggled a bit more. My first work-around was to link the remote server to my local server, and run my queries from the local server, depositing the files on the local server rather than the remote server’s temp folder. This did produce correct results, but running the queries this way appeared to take quite a bit of time.

I wanted a way to transfer global temp tables from the remote server to the local one, but when I attempted to do this using a Select Into query, SQL Server always interpreted the “##” as pointing to the local temp tables rather than the remote ones. Such queries would result in the following message:

“Database 'TempDB' ignored, referencing object in tempdb.”

Well, today I found the solution over at StackOverflow. User Ravi provided the answer in 2014, and the answer was edited by scsimon on June 2, 2016:

The query this person provided was:

SELECT * FROM OPENQUERY(linkedServerName, 'SELECT * FROM ##temp')

So – I tried it out. In my case, I wanted to take the ##OpioidList table off the remote server and make it into a table in my local database (not tempdb).

First, my remote server was already linked to my local DB – we’ll call it “secretservername” here. If you don’t know whether your remote server is linked to your local one, you can check under Server Objects –> Linked Servers.

Next, I made a query on my local server:

USE LocalServer
GO

SELECT * INTO OpioidList
FROM OPENQUERY(secretservername, 'SELECT * FROM ##OpioidList')

This worked like a dream – creating the table in my “LocalServer” database as a permanent (non-temporary) table. From there I was able to export it to a flat file to provide to someone in a different department.


Creating a pre-filled field during a SELECT INTO Statement

In preparation for performing a UNION query between a table of procedures using CPT codes and a table of procedures using ICD-9 codes, I want my field names to match, and I want a field called “PROC_CODE_TYPE” to identify which type of code each row holds once the two tables are combined.

I did a little googling, and the answers I found seemed overly complicated for what I needed: basically, create a blank column and then update it; or alternatively, create the table without that column and add it after the fact.

So I tried something simple and intuitive (and probably well-documented if not for my weak google-fu) – and it appears to work. Without further ado, this is my query that makes “CPT” the default string inside the “PROC_CODE_TYPE” field:

IF OBJECT_ID('tempdb..##EncListCPTMatches') IS NOT NULL DROP TABLE ##EncListCPTMatches;

SELECT DISTINCT ##EncList.PAT_ID,
       ##EncList.PAT_MRN_ID,
       ##EncList.PAT_ENC_CSN_ID,
       ##EncList.SURG_DT,
       'CPT' AS PROC_CODE_TYPE,
       Proc_CPT.CPT_CODE AS PROC_CODE,
       [PROC].PROC_NAME AS PROC_NAME
INTO ##EncListCPTMatches
FROM ##EncList
INNER JOIN PAT_SURG
ON ##EncList.PAT_ID = PAT_SURG.PAT_ID
AND PAT_SURG.SURG_DATE = ##EncList.SURG_DT
INNER JOIN [PROC]
ON PAT_SURG.PROC_ID = [PROC].PROC_ID
INNER JOIN Proc_CPT
ON [PROC].PROC_ID = Proc_CPT.PROC_ID
AND Proc_CPT.CPT_CODE IN (/*Approved CPTs*/ '32220', '32662', '32674', '33020')
ORDER BY ##EncList.PAT_MRN_ID, ##EncList.SURG_DT;

Deleting Rows from one table based on another table

In this example, I needed to delete the rows with null values in the ##EncList table where the CaseNo matched the CaseNo in the ##EncListWithProcStartEnd table.

DELETE EL
FROM ##EncList EL
INNER JOIN ##EncListWithProcStartEnd PSE
ON EL.CaseNo = PSE.CaseNo
WHERE EL.[Start] IS NULL
AND EL.[End] IS NULL;

I aliased ##EncList as “EL” in the DELETE clause because, as pointed out on StackOverflow, if you accidentally ran the query with only “DELETE ##EncList” highlighted, you would delete your whole table.

Error: “Arithmetic overflow error converting numeric to data type numeric.”

Today I learned that if you get this error, it is probably because you did something weird with your precision and scale parameters while defining a numeric datatype.

EG – the following portion of a select statement was used to determine patient age at service:

CAST((DATEDIFF(d, PT.DOB, [CASE].DOS)/365.25) AS numeric(3,1)) AS PtAge

I received the following error and the query was terminated:

“Arithmetic overflow error converting numeric to data type numeric.”

This happened because I was imagining all patients were under 100 years old in the population I was querying.

The “numeric(3,1)” portion of the clause allows a precision of 3 (three total digits), with a scale of 1 (one digit after the decimal point). That leaves only two digits before the decimal, so any patient over 99.9 years old overflows.

The fix is to increase the precision to 4, so that patients aged 100 and over are not excluded.

CAST((DATEDIFF(d, PT.DOB, [CASE].DOS)/365.25) AS numeric(4,1)) AS PtAge

This allowed for ages like “101.2” because there are now 4 total digits, one of which is after the decimal.
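A quick way to see the boundary for yourself (a sketch you can run in any scratch query window):

```sql
SELECT CAST(99.94 AS numeric(3,1));   -- 99.9: fits in precision 3, scale 1
SELECT CAST(101.24 AS numeric(4,1));  -- 101.2: needs precision 4
SELECT CAST(101.24 AS numeric(3,1));  -- arithmetic overflow: three digits
                                      -- before the decimal point won't fit
```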

Struggles with Scientific Notation

I inserted an ID made up of a long string of digits, stored as a “float” datatype, into a varchar field when I added it to an existing table. Somewhere in that conversion, the number changed from a string of digits to scientific notation.

E.g., 10018507001 turned into 1.00185e+010 in the varchar field.

I haven’t resolved this, but I wanted to make note that it happened.

Instead, as deadlines are involved, I found a way to retroactively patch this data back into the field. I actually used Access to do this – importing the table, creating a text field, updating it with the float numeric data from the original file, and then re-importing the table to run the remaining queries I needed to run in SQL.
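For anyone hitting the same wall in T-SQL itself: the scientific notation comes from the default float-to-varchar conversion, so converting the float to an exact integer type first should preserve the digits. A sketch, with made-up table and column names:

```sql
-- CAST(float AS varchar) uses scientific notation for large values,
-- which is how 10018507001 became '1.00185e+010'.
-- Routing through bigint first keeps every digit:
UPDATE dbo.ImportedIds
SET Id_Text = CAST(CAST(Id_Float AS bigint) AS varchar(20));
```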


Datatypes and Decimals

Every once in a while I have a numeric field that needs to keep the digit or two after the decimal point, even when that digit is zero. Last time it was a software version number in a numeric field, and today I ran into the same issue with an A1C value that I took from a “float” field and inserted into a varchar field. That produced pairs of values that should have matched but didn’t (8.0 from a competing source, 8 from this source).

I’m pretty sure there’s a bunch of fancy stuff I don’t know yet with datatypes and how to format them, but for this one issue, I was able to look back into my email and find Joe Sweeney‘s very helpful response to my question last time:

“A numeric type is defined by numeric(p,s), where p is precision (how many digits total) and s is scale (how many digits after the decimal).”

I have read about scale and precision before, but it was so nice to have it stated so simply.

With this knowledge, I am attempting my very first column alteration. Here goes:

ALTER TABLE DMProject.dbo.AllA1CRegLabResults
ALTER COLUMN HBA1C_LAST numeric(4,1);

Phew! It worked. My data didn’t disappear, and the whole numbers now have a “.0” after them. Hurray!

MVA DML Statements – Adventureworks Queries

Time for me to follow along – perhaps you’d like to as well. I’m working in SQL Server 2014 Express today. I have 2016 Express installed on my personal computer as well. However, I’m using Adventureworks 2014 for both, so my results should be consistent on either computer.

Here is the first Adventureworks-compatible query from MVA’s DML Statements lecture:

Select Specific Columns and Use a Where Clause

(I do add the semicolon at the end of my statements – which the lecturers do not do.)

As you can see, this query returned 50 rows (bottom right-hand corner), and it includes the columns from the SELECT clause.


More to come 🙂


SQL Server 2016 Developer Edition Free

w/ Visual Studio Dev Essentials Program

I ran into this bit of info today and thought I’d pass it along. I don’t know that I need (or will ever need) this edition or the Visual Studio Dev Essentials program, but it seemed a bit exciting.

The article does warn that the free offering cannot be used in production environments, whereas Express can be.

I haven’t decided what I should do yet – I have never even opened Visual Studio. But I have sent a note to the smartest person I know on the matter – Joe Sweeney – in hopes of some guidance.

I shall report back.

Using DML Statements – Notes from MVA Database Fundamentals

https://mva.microsoft.com/en-US/training-courses/database-fundamentals-8243?l=IAdTlyJy_004984382

Using DML Statements

I’ve been trying to get through this section of the course for a while now, so I’m basically forcing myself to take notes.

 ————–

DML – Data Manipulation Language

Common DML Statements:

SELECT – retrieves data
INSERT – adds data
UPDATE – modifies data
DELETE – removes data
BULK INSERT – imports a data file

The SELECT statement basically retrieves the data you request.

The most basic one is:

SELECT * FROM tablename;

This one will retrieve all rows and columns (including duplicates) from the table, “tablename.”

If you don’t want duplicate rows, you could use the following instead:

SELECT DISTINCT * FROM tablename;

I bring up the SELECT DISTINCT because in my work I almost always use SELECT DISTINCT rather than just SELECT, as I only very rarely want duplicates.

To limit the number of rows returned, one thing you can do is add a WHERE clause after the “FROM” clause that winnows down the data a bit more.
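A minimal sketch of that shape, using the AdventureWorks Person.Person table (any table and filter will do):

```sql
SELECT DISTINCT FirstName, LastName
FROM Person.Person
WHERE LastName = 'Smith';
```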

I’m only 2 minutes in, but it’s time for my bed. More notes tomorrow. 🙂

CASE WHEN

I ran into this today, and it looks really helpful for times when you need to have a certain value in a field which is based on (but not identical to) a value in one of your tables.

To illustrate: say you needed to pull data on a person, including whether or not said person was retired. The table has “Retired” and “Working” as the possible values in the “Retirement” field of the “People” table.

The Select query may look like this:

SELECT FirstName, LastName, DOB, Retirement 
FROM People;

But what if you were required to provide a code instead of “Retired” or “Working”? Maybe “Retired” needs to come back as “01” and “Working” needs to come back as “02.”

The way I would have dealt with this on my own is to make a tiny table with the codes and the retirement statuses in separate columns, then join it to the “People” table. But there’s another way – I think maybe a better way in terms of load on the server. (Is it better?)

It’s the “CASE WHEN” clause, and this is what it looks like:

SELECT FirstName,
       LastName,
       DOB,
       CASE
           WHEN Retirement = 'Retired' THEN '01'
           WHEN Retirement = 'Working' THEN '02'
       END AS RetirementStatus
FROM People;

I did find a real explanation of CASE at modeanalytics.com – and it looks like it might be a good resource for learning SQL in general.


DROP IF EXISTS (DIE) pre-SQL Server 2016

SQL Server 2016 has a much simpler way to do this; but if you need to re-run queries that create tables, and those tables may or may not still exist, it can be difficult for a newbie to handle gracefully.

If you run the queries and the table(s) already exist, your query errors out.

If you preface your query with a drop table query, and the table doesn’t exist, you also get stopped dead in your tracks.

SQL Server 2016 has a simple way to do this:

DROP TABLE IF EXISTS tablename;

My understanding is that this is not a new idea in the greater SQL world – it’s just newly implemented in SQL Server.

However, if you aren’t blessed with SQL Server 2016, there’s another way:

IF OBJECT_ID('[databasename].[schema].[tablename]') IS NOT NULL DROP TABLE [databasename].[schema].[tablename];

What’s more, if you’re working with temp tables:

IF OBJECT_ID('tempdb..#temptablename') IS NOT NULL DROP TABLE #temptablename;

This is likely clunky for you smooth, practiced, experienced folks out there. But learning this was a god(dess?)-send for a newbie like myself. 🙂

Here’s a link from someone more practiced at these things than I:

DROP IF EXISTS

UNION INTO

So, this appears to have worked:

SELECT *
INTO [COPD].[dbo].[COPDCombinedGlucoseResults]
FROM [COPD].[dbo].[COPDLabGlucoseResults]
UNION
SELECT *
FROM [COPD].[dbo].[COPDFlowsheetGlucoseResults];

The new table has 156558 rows.

The LabGlucose table had 137417 distinct rows and the Flowsheet Glucose table had 72986 distinct rows (210403 combined), so I think that means there were 210403 - 156558 = 53845 matching rows.

Just for grins, and because this was my first real-live UNION query, I thought it would be interesting to do an INTERSECT query to see if the 53845 number holds water and to test my newbie understanding of these queries:

SELECT *
FROM [COPD].[dbo].[COPDLabGlucoseResults]
INTERSECT
SELECT *
FROM [COPD].[dbo].[COPDFlowsheetGlucoseResults];

Lo and behold: 53845 rows. So the row count of a UNION query plus the row count of an INTERSECT query (156558 + 53845 = 210403) equals the row counts of the two tables added together.

I read today about the UNION ALL query, which appears to be the version of the UNION query that keeps duplicate rows.

By this logic, a UNION query + an INTERSECT query = a UNION ALL query – at least when neither input table contains duplicate rows of its own, as is the case here. I must test this.

The number of rows from each table add up to: 210403

SELECT *
FROM [COPD].[dbo].[COPDLabGlucoseResults]
UNION ALL
SELECT *
FROM [COPD].[dbo].[COPDFlowsheetGlucoseResults];

and we have a winner! 210403 rows.

I think I might possibly be falling in love with SQL. I’ve almost forgotten my garden.

Import a table from Microsoft Access into SQL Server

Preliminary: Save your Access Database in the 2002-2003 file format. That means it must be a .mdb.

These are the steps for using the wizard to import the table:

  1. Right-click on your database, and choose Tasks –> Import Data…
  2. Choose “Microsoft Access (Microsoft Access Database Engine)” from the Data source dropdown menu in the SQL Server Import and Export Wizard.
  3. Find your database using the Browse button unless you really just like typing things in. Then hit “Next.”
  4. In the “Destination” field, choose “SQL Server Native Client 11.0.” Choose your Server Name, fill in your Database name, and hit “Next.”
  5. On the next page, choose “Copy data from one or more tables or views,” unless you’re fancier than I am and want to try that second option. Then click “Next.”
  6. Click on the box(es) for the table(s) you want. Then click “Edit Mappings.”
  7. The default for these values was nvarchar, which is a Unicode datatype and not compatible with the data I hope to connect it to, so I changed the values to varchar. Click “OK” to go back to the previous window. You can also edit the SQL if you’d like.
  8. You can preview the data by clicking “Preview.”
  9. I didn’t change anything on this page. Hit “Next.”
  10. Hit “Next” on this page too.
  11. If everything is kosher, the file should upload correctly.

Moving a Temp table to local server

So, I am still attempting to get my local server and the big server to cooperate. When I try to move a temp table to the local server (where I have import/export rights), I get the following error:

/*————————
USE COPD
GO

SELECT *
INTO COPD.dbo.InpatientStay
FROM [BIGSERVER].dbo.##COPDInpatientStay;
————————*/
Database name 'BIGSERVER' ignored, referencing object in tempdb.
Database name 'BIGSERVER' ignored, referencing object in tempdb.
Msg 208, Level 16, State 0, Line 4
Invalid object name ‘##COPDInpatientStay’.

Off to google this error…

(COPD is a database on my local machine.)

Today’s T-SQL Headache

Today I came into the office in hopes of re-creating the below report using SQL Server. First I thought it might be nice to upload a table into my local DB that had the list of departments at the location requested for creating the patient list.

COPD Report Criteria

First, I couldn’t connect to the local server. It was basically the same error I got on my Windows 10 computer at home, except this was my Windows 7 computer at work. Thankfully, a facebook friend’s dad was able to help me figure out the issue. It turned out I needed to turn on TCP/IP and Named Pipes in the SQL Server Configuration Manager, and then restart the SQL Server service. Yay!

But I still can’t seem to get a query to work which uses tables in both my local DB and the big server. I read that I needed to create a linked server, so I linked the big server to my local database under “Server Objects,” but I still haven’t figured out how to write a query that references tables in both places without getting invalid object name errors. So today – today was mostly frustration. The possibilities are exciting and I learned how to do some neat stuff, but I didn’t get to do the things I had hoped to do.


T-SQL Queries: UPDATE one table with data from another table

[Screenshot: UPDATE one table with data from another table]

This is something you would never want to do with these two tables in real life, but I wanted to try out the functionality so that tomorrow I could go up to work and try it on a temp table as part of a project we are currently working on.
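Since the query itself lives in a screenshot, here is roughly the shape of the UPDATE … FROM pattern (a sketch; the Bonus column on Sales.SalesPersonQuotaHistory follows this post’s own SET line and may not match the real AdventureWorks schema, so treat the column names as placeholders):

```sql
-- Overwrite SalesPerson.Bonus with the value from the joined table.
UPDATE SP
SET SP.Bonus = QH.Bonus
FROM Sales.SalesPerson AS SP
INNER JOIN Sales.SalesPersonQuotaHistory AS QH
    ON SP.BusinessEntityID = QH.BusinessEntityID;
```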

I also want to try this with a

WHERE Sales.SalesPerson.BusinessEntityID = Sales.SalesPersonQuotaHistory.BusinessEntityID

instead of the “FROM” clause – though if this has a chance of working, I need to update the set line to:

SET Sales.SalesPerson.Bonus = Sales.SalesPersonQuotaHistory.Bonus

I guess we shall see. 🙂