Sunday, May 26, 2013

Can stable (or immutable) functions call volatile functions?

Posted: 26 May 2013 05:55 PM PDT

The PostgreSQL documentation states:

Any function with side-effects must be labeled VOLATILE...

Consider the following function:

CREATE OR REPLACE FUNCTION count_items()
  RETURNS integer AS
$BODY$
DECLARE
  v_result INTEGER DEFAULT 0;
BEGIN
  SELECT
    count( t.id )
  INTO
    v_result
  FROM
    some_table t;

  RETURN v_result;

EXCEPTION
  WHEN OTHERS THEN
    PERFORM error_log_insert( SQLSTATE, SQLERRM, current_query() );
    RETURN 0;
END;
$BODY$
  LANGUAGE plpgsql STABLE
  COST 10;

Since error_log_insert alters the database (performs an insert upon an exception), does this mean that the count_items function has a side-effect (albeit indirect), and thus cannot be declared STABLE, but must be VOLATILE?

In other words, does the stability or volatility of a function also depend on the functions it calls within its exception block?

If that is the case, then how would you create STABLE functions in PostgreSQL that log all exceptions to a database table?
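For what it's worth, PostgreSQL does not enforce volatility labels at call time: they are promises to the planner, so a STABLE function that calls a VOLATILE one will compile and run, and the risk is that the planner may cache results or skip calls in ways that also skip the side effect. A minimal sketch of the conservative choice, using the names from the question:

```sql
-- PostgreSQL accepts either label without complaint; volatility is a planner
-- hint, not an enforced constraint. The conservative option is to relabel the
-- function VOLATILE, since its exception path can write to the database:
ALTER FUNCTION count_items() VOLATILE;
```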

How to use Python and XML for distributed DBMS

Posted: 26 May 2013 02:11 PM PDT

I have a project where three VMs (say VM1, VM2 and VM3) are given to me, each with 4 GB of RAM and 50 GB of disk space. They have known static IPs.

All three VMs run Ubuntu 10.04 and MySQL. VM2 and VM3 will hold the data, and VM1 will act as the application engine: it extracts data from VM2 and VM3, integrates it, and displays the result. All VMs are accessed from another computer (running Fedora), so all my coding will have to be done through the terminal.

Python and XML are to be used to build the application engine (on Apache Tomcat). I have experience with SQL and MySQL and know C/C++ well, but I have very little exposure to Python.

What are the prerequisites for this? Links to any tutorials or similar code/scripts would be helpful.

How to repair Microsoft.SqlServer.Types assembly

Posted: 26 May 2013 02:19 PM PDT

When I run DBCC CHECKDB('mydb'), this is the only error message printed.

Msg 8992, Level 16, State 1, Line 1
Check Catalog Msg 3857, State 1: The attribute (clr_name=NULL) is required but is missing for row (assembly_id=1) in sys.assemblies.

It refers to 'Microsoft.SqlServer.Types'. I do see that in this database the clr_name is blank, but in the master database there is a value there.

I tried to drop or alter the assembly to add this value, but it's restricted.

By the way, this database was recently upgraded from SQL Server 2005 to 2008 R2.
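As a starting point for comparing the broken row against a healthy one (a sketch; replace mydb with the actual database name):

```sql
-- Compare the assembly metadata in the damaged database against master
SELECT name, clr_name
FROM   mydb.sys.assemblies
WHERE  name = 'Microsoft.SqlServer.Types';

SELECT name, clr_name
FROM   master.sys.assemblies
WHERE  name = 'Microsoft.SqlServer.Types';
```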

Unable to connect to Amazon RDS instance

Posted: 26 May 2013 01:33 PM PDT

I recently created an Oracle instance on Amazon RDS. Unfortunately, I'm not able to connect to it using Oracle SQL Developer.

The (relevant) information I have from Amazon:

Endpoint - The DNS address of the DB Instance: xxx.yyy.eu-west-1.rds.amazonaws.com

DB Name - The definition of the term Database Name depends on the database engine in use. For the MySQL database engine, the Database Name is the name of a database hosted in your Amazon DB Instance. An Amazon DB Instance can host multiple databases. Databases hosted by the same DB Instance must have a unique name within that instance. For the Oracle database engine, Database Name is used to set the value of ORACLE_SID, which must be supplied when connecting to the Oracle RDS instance: ZZZ

Master Username - Name of master user for your DB Instance: org

Port - Port number on which the database accepts connections: 1521

From this information, the connection settings in SQL Developer are pretty obvious, so I don't really see what I could be missing...

how to change a user password while checking the current password in oracle?

Posted: 26 May 2013 05:45 PM PDT

In my application I have a change-password form in which users enter their current password and the new password. I then change their password using the following command:

alter user user_name identified by new_password;  

My problem is that I don't know how to check whether the current password the user entered in the form is correct (as the passwords are stored encrypted in Oracle).

How should I do this?

Thanks in advance.
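One sketch of an answer (hedged; whether it applies depends on the setup): when a password verify function is configured, Oracle's ALTER USER supports a REPLACE clause that requires the old password; alternatively, the application can simply attempt a throwaway logon with the entered current password before issuing the ALTER USER.

```sql
-- REPLACE makes Oracle verify the old password before setting the new one
-- (enforced for non-DBA users when a password verify function is in effect):
ALTER USER user_name IDENTIFIED BY new_password REPLACE old_password;
```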

Table optimize command crashed MySQL server

Posted: 26 May 2013 06:01 PM PDT

I have a large 5GB table on one of my MySQL databases that I wanted to optimize.

I went into the console and ran the OPTIMIZE TABLE command; since it's an InnoDB table, it rebuilds the table by copying it into a temporary one (is that accurate?).

Anyway, I did not notice that I didn't have enough space on my disk to complete this action, so it failed in the middle:

+------------------+----------+----------+-------------------------------------------------------------------+
| Table            | Op       | Msg_type | Msg_text                                                          |
+------------------+----------+----------+-------------------------------------------------------------------+
| support.exported | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| support.exported | optimize | error    | The table '#sql-420a_65fb' is full                                |
| support.exported | optimize | status   | Operation failed                                                  |
+------------------+----------+----------+-------------------------------------------------------------------+

Now my disk is 100% full, since the temporary table that was created ate up all the storage I had left, and I don't have any files I can delete elsewhere on the disk to free space.

I tried to restart the mysql daemon, thinking that might drop the temporary table and free some space, but now I cannot start the server:

MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]

Is there a way to drop/delete the temporary table that was created during the optimize process without having the mysql server running?

exported .SQL file won't import completely in mySQL

Posted: 26 May 2013 01:47 PM PDT

I am running MySQL on an Apple server (OS X 10.6) and am a complete newbie to SQL. I am trying to import a .SQL export file generated by an application called MailSteward Pro. The .SQL file is an email database about 88 GB in size. When I run the import in Sequel Pro, I see a progress bar that goes almost all the way to the end (it takes hours and shows no problems), but then it suddenly shows a message that says:

[ERROR in query 650009] You have an error in your SQL syntax: check the manual that corresponds to your MySQL server version for the right syntax to use near "\d\">+B$?\"+vq+q6]\Vss<:t/qs4>{]:>' at line 1

What in the world does this mean, and how do I figure out why I can't import my saved .SQL file? It gets most of the way to the end, but shows only 260,000 emails instead of 290,000. What's my next step? How do I debug this and import as much of this database as possible? Maybe check it for errors somehow, or have it skip the bad record and import the rest?

Communication between two database servers

Posted: 26 May 2013 07:30 AM PDT

I have 2 databases running on different virtual servers. I have some database processes running on one node. These processes connect to Database Gateway (PG4MQ) and process the messages.

Currently I have to start the processes manually. I need to write a script that will monitor the node on which the processes are running; if that node is down or the processes are not running, they should automatically be started on the other node.

Can someone tell me whether this is feasible, and if so, how it can be achieved?

Databases installed on RHEL 5.8. OEM, Grid, DBCA and Oracle Net services are not available.

Is it possible to change the MySQL server-id during a SQL session?

Posted: 26 May 2013 05:15 AM PDT

I have a MySQL 5.5 server with a server-id of 20. I want to execute a few queries on that server, but have them write to the binlog as a different server-id than 20. Is this possible?

I am aware that SET sql_log_bin is commonly used to turn binary logging on/off at will, but I actually do want to execute these queries and have them written to the binlog, just under another server-id.

SET server_id = 21 errors out, saying it must be set globally rather than as a session variable, so I may already have my answer; still, I have a feeling it is possible somehow. After all, the MySQL replication slave component can do it.
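A sketch of the global-variable route (an assumption, not a confirmed recipe; changing server_id on a live replication topology can have side effects, and the change applies to every session's binlog events while it is in force):

```sql
-- server_id is GLOBAL only, so a session-level SET fails:
SET SESSION server_id = 21;   -- ERROR 1229: Variable 'server_id' is a GLOBAL variable

-- With the SUPER privilege it can be changed globally, the statements run,
-- and the old value restored afterwards:
SET GLOBAL server_id = 21;
-- ... statements to be logged under server-id 21 ...
SET GLOBAL server_id = 20;
```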

Will Partitions and Indexes on the same table help in performace of Inserts and Selects?

Posted: 26 May 2013 02:28 PM PDT

I have a table containing the list of visitors and this table has the following information.

  • Visitor Browser Information
  • Visitor Location Information
  • Visitor Time Information
  • No of Visits

I have a second table that maintains the history of each visit: if the same visitor visits the site again, I insert into the second table and update the number of visits in the first table.

The kind of reports that I have to generate for this table are

  1. Count of Visitors/day or days (Search Between days)
  2. Count of Visitors/month
  3. Count of Visitors/year
  4. Count of Visitors/browser or grouped by browsers

On average there are about 20,000 inserts into the second table and about 15,000 inserts into the first table, meaning 5,000 were updates to the first table (5,000 repeat visits).

I need to decide between partitioning the tables by month with sub-partitioning by day for reports 1-3, and indexing the browser-related columns for report 4.

There will be more reports in the future; I am not sure what clauses they will use.

Does partitioning/sub-partitioning along with indexing help in the performance of inserts and selects?

Should I perform partitioning on both the tables?

I am currently using MySQL 5.5 + InnoDB
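For concreteness, a sketch of what monthly RANGE partitioning could look like on the history table (table and column names are assumptions, not from the question; note that in MySQL 5.5 the partitioning key must be part of every unique key, hence the composite primary key):

```sql
CREATE TABLE visit_history (
    id         BIGINT NOT NULL AUTO_INCREMENT,
    visitor_id BIGINT NOT NULL,
    visit_time DATETIME NOT NULL,
    PRIMARY KEY (id, visit_time),
    KEY idx_visit_time (visit_time)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(visit_time)) (
    PARTITION p201304 VALUES LESS THAN (TO_DAYS('2013-05-01')),
    PARTITION p201305 VALUES LESS THAN (TO_DAYS('2013-06-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

Date-range reports can then prune to a few partitions, while inserts still append to the current partition.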

Join on different types

Posted: 26 May 2013 02:32 PM PDT

In a database I have two tables:

  1. The first has a field named taxonomy_id that is an integer
  2. The latter has a field named ID that is a character varying

The two tables are related: if a row exists in the first with taxonomy_id = N, then a row exists in the second with ID equal to the string form of N.

Now I would like to join these two tables; naturally a plain join doesn't work because of the type mismatch.

Can you help me solve this?

I'm using PostgreSQL.
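A minimal sketch (table names are placeholders): cast the integer side to text in the join condition.

```sql
SELECT *
FROM   table_one t1
JOIN   table_two t2
  ON   t1.taxonomy_id::text = t2."ID";
-- Casting the integer to text (rather than the text to integer) also avoids
-- runtime errors if the varchar column ever holds non-numeric values.
```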

"Arithmetic overflow" when initializing SQL Server 2012 replication from backup

Posted: 26 May 2013 02:56 PM PDT

I'm initializing SQL Server replication from a backup, by following instructions from here:

http://www.mssqltips.com/sqlservertip/2386/initialize-sql-server-replication-using-a-database-backup/

...but, when I execute

USE MyDatabase
GO
EXEC sp_addsubscription
  @publication = MyDatabasePublication,
  @subscriber = 'AMAZONA-XXXXXXX',
  @destination_db = MyDatabase,
  @sync_type = 'initialize with backup',
  @backupdevicetype = 'disk',
  @backupdevicename = 'D:\Temp\MyDatabasepublication.bak'

I get the following error:

Msg 8115, Level 16, State 2, Procedure sp_MSsetupnosyncsubscriptionwithlsn, Line 237
Arithmetic overflow error converting expression to data type nvarchar.

Any idea why, or at least where can I find this stored procedure to troubleshoot further?

SQL Server 2012, Standard Edition.

UPDATE: It looks like the problem is caused by the fact that the database was created with SQL Server 2008 R2 and then attached here. Anyway, I still need a solution for it.
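To at least locate the procedure text for troubleshooting, sp_helptext can usually print the source of replication system procedures (a sketch; run it from the publisher):

```sql
EXEC sp_helptext 'sp_MSsetupnosyncsubscriptionwithlsn';
```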

limit the number of rows returned when a condition is met?

Posted: 26 May 2013 01:11 PM PDT

Is it possible to limit the number of rows returned when a condition is met? I am working on a query to check if a student is ready to graduate, and they need to meet a certain number of credits per subject. I don't want all classes, because any class past the number of credits needed can be used for electives.

EDIT: I forgot to mention that this is SQL 2008 R2

I was hoping to be able to do something like this (which I know doesn't work)

select top(sum(credits) > 3.0) c.name, c.subject, c.credits, c.pass
from classes as c
where c.Subject = 'Math' and c.passed = 1

Any help would be great


Data

Subject        | Name              | Grade | Credit | Pass
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
Social Studies | Global Studies 9  | 83    | 1.00   | 1
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | US History 11     | 87    | 1.00   | 1

Query

select c.Subject,
       c.Name,
       c.credits,
       c.pass,
       c.score
from @classes as c
where (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Econ%')
or    (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Gov%')
or    (c.Subject = 'Social Studies' and c.pass = 1)
group by c.Subject, c.Name, c.credits, c.pass, c.score
having Sum(credits) <= 2.0

I'm expecting to see these rows returned:

Expected Results

Subject        | Name              | Grade | Credit | Pass
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
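Since this is SQL Server 2008 R2 (no windowed running SUM with ORDER BY), one sketch is a correlated running total; the ordering within the subject here (by Name) is an assumption, and a real query would need a deterministic order matching how you want electives chosen:

```sql
SELECT Subject, Name, Grade, Credit, Pass
FROM (
    SELECT c.*,
           (SELECT SUM(c2.Credit)
            FROM   classes c2
            WHERE  c2.Subject = c.Subject
              AND  c2.Pass = 1
              AND  c2.Name <= c.Name) AS running_credits
    FROM classes c
    WHERE c.Subject = 'Social Studies'
      AND c.Pass = 1
) t
WHERE running_credits - Credit < 2.0;  -- keep rows that start below the 2.0 target
```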

tempdb logs on same drive as tempdb data or with other logs?

Posted: 26 May 2013 05:11 AM PDT

For many reasons I only have 3 hard drives (RAIDed and in an Always-On AG) for all my database files:

  • D: Data
  • E: Logs
  • F: Tempdb

Should the tempdb log file go on F: with the data file(s) or on E:?

My tempdb data file has the highest stalls by far, with the log file 4th out of 24.

In my limited DBA experience (I'm a developer), I would lean toward putting the tempdb.ldf on E:, as the writes will all be sequential.

Efficient way to perform approximated search?

Posted: 26 May 2013 08:11 AM PDT

I have to perform a join between table_a and table_b using three fields as the key, one of them being the date of the event; that is, both tables have id1, id2 and evnt_time for each record.

As it happens, though, evnt_time can be displaced by a few seconds between these tables. Thus the join has to behave sort of like table_a.id1 = table_b.id1 and table_a.id2 = table_b.id2 and table_a.evnt_time ~ table_b.evnt_time, where:

  • ~ means approximately, given + or - seconds
  • There must be a way to give ~ parameters of how precise should be. e.g.: table_a.evnt_time ~(2) table_b.evnt_time will join table_a.evnt_time with table_b.evnt_time - 2s, table_b.evnt_time - 1s, table_b.evnt_time, table_b.evnt_time + 1s and table_b.evnt_time + 2s.

To tackle the situation there are a few possibilities I am experimenting with, but my doubt is which architecture to use to perform a very efficient "approximate join". These tables are partitioned and sub-partitioned, and each sub-partition may contain billions of records (although I have a reasonable amount of resources).

For one, I thought of storing the absolute second of the event (i.e. the second it happened, counted from the Julian calendar origin) on both sides, so the real join (given "~(2)") could simply look like:

select *
from
    table_a,
    table_b
where
    table_a.id1 = table_b.id1
    and table_a.id2 = table_b.id2
    and (table_a.evnt_sec = table_b.evnt_sec
         or table_a.evnt_sec = table_b.evnt_sec + 1
         or table_a.evnt_sec = table_b.evnt_sec + 2
         or table_a.evnt_sec = table_b.evnt_sec - 1
         or table_a.evnt_sec = table_b.evnt_sec - 2)

But I am unsure how efficiently that would perform with the indexes and scans.

This is just an example, I am not stuck to any sort of architecture. Also, I am using Oracle 11gR2.
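One alternative sketch (Oracle syntax; whether it beats the OR-list depends on the actual plans): express the tolerance as a range predicate, so a composite index on (id1, id2, evnt_time) can be range-scanned per row rather than probed five times.

```sql
SELECT *
FROM   table_a a
JOIN   table_b b
  ON   a.id1 = b.id1
 AND   a.id2 = b.id2
 AND   a.evnt_time BETWEEN b.evnt_time - INTERVAL '2' SECOND
                       AND b.evnt_time + INTERVAL '2' SECOND;
```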

MySQL: Error reading communication packets

Posted: 26 May 2013 06:11 AM PDT

I get this warning in mysql,

[Warning] Aborted connection 21 to db: 'MyDB' user: 'MyUser' host: 'localhost' (Got an error reading communication packets)  

I have been through a few threads on Google, and following some suggestions I increased max_allowed_packet from 128M to 512M to 1024M, but I still see the same behaviour.

I am using Drupal 7, and yes, there are lots of BLOB data types, but 1024 MB of max_allowed_packet should be enough in my opinion.

Any other workaround to overcome this warning?

EDIT:

Added some settings as @Rolando's suggestions/answer, I still get the same warning.

My mysql config looks like this:

[client]
port        = 3306
socket      = /tmp/mysql.sock
default-character-set = utf8

[mysqld]
port        = 3306
socket      = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 16K
max_allowed_packet = 1024M
table_open_cache = 128
sort_buffer_size = 64K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 192K
# Query cache disabled
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 100
thread_concurrency = 10
tmp_table_size = 128M
max_heap_table_size = 128M
log_error               = /var/log/mysql/mysql-error.log
log_slow_queries        = /var/log/mysql/mysql-slow.log
long_query_time = 2

log_warnings = 2

server-id   = 1
binlog-format = row
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1
log_bin = mysql-bin
log-slave-updates
relay-log=mysqld-relay-bin
expire_logs_days        = 10
max_binlog_size         = 100M

innodb_data_home_dir = /var/db/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/db/mysql
innodb_buffer_pool_size = 8G
character-set-server = utf8
#innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 2047M
innodb_log_buffer_size = 32M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_lock_wait_timeout = 50
innodb_flush_method = O_DIRECT

[mysqldump]
quick
quote-names
max_allowed_packet = 16M
default-character-set = utf8

[mysql]
default-character-set = utf8

[myisamchk]
key_buffer_size = 32M
sort_buffer_size = 32M

[mysqlhotcopy]
interactive-timeout

[mysqld_save]
syslog

My application uses only InnoDB; the only databases using the MyISAM engine are the few that come with a standard MySQL installation (such as mysql), which I guess should not be a concern.

As you can see I have replication too; the warning is the same on the replicated server, whose config is identical to this one.

SQL Server 2008 R2 SP1 - Tempdb "Available free space" showing negative value

Posted: 26 May 2013 06:40 PM PDT

Please help as this issue is driving me nuts...

Tempdb has four data files & one log file. They are all located on the same dedicated drive.

When I right click on tempdb in SQL Server Management Studio (SSMS) to view Properties --> General...I'm seeing "Size" as ~65000 MB with "Space Available" as ~64000 MB. However, when I try to shrink the files via the GUI, I see that the "currently allocated space" hasn't changed & is still showing the initial size.

This is also where the "Available free space" is shown as a negative number. The only workaround I have found so far is to restart SQL Server to flush out tempdb, but I can't keep doing this.

Has anyone come across this, and do you know the root cause/permanent solution for it?

Thanks in advance!

MySQL subqueries that use range based on values of main queries don't use indices properly

Posted: 26 May 2013 07:11 PM PDT

I think I've isolated a problem that has been affecting many of my queries lately. And would like some help to figure out a solution for this.

OK, so my finding is that a query that normally runs very fast over a couple of rows can use indexes improperly when it appears in a subquery whose bounds come from values of the main query.

Let's take an example:

DROP TEMPORARY TABLE IF EXISTS Person;
DROP TEMPORARY TABLE IF EXISTS CofeeBreaks;
CREATE TEMPORARY TABLE IF NOT EXISTS Person
(
    `person_id` INT(11) AUTO_INCREMENT,
    `age` INT,
    `lastCofee` DATETIME,
    KEY `idkey` (`person_id`) USING BTREE,
    KEY `datekey` (`lastCofee`) USING BTREE
) ENGINE = MEMORY;
CREATE TEMPORARY TABLE IF NOT EXISTS CofeeBreaks
(
    `id` INT(11) AUTO_INCREMENT,
    `cofeeBreakStart` DATETIME,
    `cofeeBreakEnd` DATETIME,
    KEY `brekIdKey`(`id`) USING BTREE
) ENGINE = MEMORY;
INSERT INTO Person (age, lastCofee) VALUES (24, '2013-03-27 14:45:34');
INSERT INTO Person (age, lastCofee) VALUES (34, '2013-03-27 14:46:38');
INSERT INTO Person (age, lastCofee) VALUES (26, '2013-03-27 15:25:24');
INSERT INTO Person (age, lastCofee) VALUES (28, '2013-03-27 16:33:54');
INSERT INTO Person (age, lastCofee) VALUES (46, '2013-03-27 17:11:03');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 15:11:03', '2013-03-27 17:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 14:45:34', '2013-03-27 15:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 17:11:03', '2013-03-27 17:11:03');
SELECT * FROM Person WHERE lastCofee BETWEEN '2013-03-27 15:11:03' AND '2013-03-27 17:11:03';
SELECT
    *,
    (SELECT AVG(Person.age)
     FROM Person
     WHERE Person.lastCofee BETWEEN CofeeBreaks.cofeeBreakStart AND CofeeBreaks.cofeeBreakEnd) AS averageAge
FROM CofeeBreaks;

So the explain results for the first select are as follow:

1   SIMPLE  Person  range   datekey datekey 9       1   Using where  

But the second query doesn't use the index properly in the subquery and analyses more rows than necessary:

id  select_type         table        type  possible_keys  key  key_len  ref  rows
1   PRIMARY             CofeeBreaks  ALL                                     3
2   DEPENDENT SUBQUERY  Person       ALL   datekey                           5

As we can see, the subquery ends up examining all rows of the Person table, even though none of the CofeeBreaks ranges spans all 5 persons.

The way I've been working around these performance issues in a very busy database is to SELECT into a temporary table and then loop over its rows, updating it with the aggregates I need. This is obviously ugly code and shouldn't be built that way, but I really haven't found a way to optimize this kind of subquery, and I've tried a lot of possible approaches without success.

Thanks in advance for any inputs.
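For what it's worth, one common rewrite (a sketch using the question's own schema): pull the dependent subquery up into a join with GROUP BY, so the optimizer can evaluate the date range per CofeeBreaks row instead of re-running a dependent subquery.

```sql
SELECT cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd,
       AVG(p.age) AS averageAge
FROM CofeeBreaks cb
LEFT JOIN Person p
  ON p.lastCofee BETWEEN cb.cofeeBreakStart AND cb.cofeeBreakEnd
GROUP BY cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd;
```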

SSRS Bar Chart Issue

Posted: 26 May 2013 09:11 AM PDT

I have a bar chart in which I am trying to limit the sales data to specific months. I want a SUM of sales dollars for 2012 and a SUM of sales dollars for 2013. The month names are formatted like "January 2012", "January 2013", etc. How can I do this?
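One option (a sketch; the table and column names here are assumptions) is to derive the year in the dataset query, so the chart can simply group and sum on it:

```sql
SELECT RIGHT(RTRIM(MonthName), 4) AS SalesYear,
       SUM(SalesAmount)           AS TotalSales
FROM   SalesData
WHERE  RIGHT(RTRIM(MonthName), 4) IN ('2012', '2013')
GROUP BY RIGHT(RTRIM(MonthName), 4);
```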

SSIS Package Data Flow Task RollBack on Error

Posted: 26 May 2013 09:12 PM PDT

I tried running a package and it failed, but I noticed in the output window (in Visual Studio) that it had managed to write 2000+ rows to a table. I was wondering whether an SSIS package rolls back the transactions in my Data Flow Task if the overall package fails. I found this thread:

http://www.sqlservercentral.com/Forums/Topic998096-391-1.aspx#bm998104

which states that in order for the package to rollback DFT changes:

  • Set the TransactionOption property of the package to "Required"
  • Set the TransactionOption property of the DFT to "Supported"

I had the TransactionOption property of both the package and the DFT set to "Supported", so I am guessing the rows written to the table were not rolled back... Do you all agree that this is the best way to make sure table changes are rolled back in the event the package fails? Will this also roll back File System Tasks, such as file moves on the local file system?

Query to find Maximum friends not giving correct resuts

Posted: 26 May 2013 04:53 AM PDT

Table Schema for the Likes table:

id  id1
1   2
1   3
2   1

I count the total number of Likes for each student with this:

select id, count(*)
from friends
group by id

I have to find the students with the maximum number of Likes, so I use this query as a temp table:

select * from (
    select f1.id, count(*) as count1
    from Likes f1
    group by f1.id
) temp
where not exists (
    select f2.id, count(*) as count2
    from Likes f2
    group by f2.id
    having count2 > temp.count1
)

It returns all of the records. Can anyone point out what I am doing wrong in this query?
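For comparison, a common pattern for "rows with the maximum count" (a sketch against the Likes table): compute the maximum like-count in a subquery and keep the ids that reach it.

```sql
SELECT id, COUNT(*) AS likes
FROM Likes
GROUP BY id
HAVING COUNT(*) = (SELECT MAX(c)
                   FROM (SELECT COUNT(*) AS c
                         FROM Likes
                         GROUP BY id) t);
```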

How to remove column output in a for xml path query with a group by expression?

Posted: 26 May 2013 10:11 AM PDT

I forgot how to exclude a column from the output of a FOR XML PATH query that uses a GROUP BY expression. I have used this before, but somehow I lost the article. In the example below, I do not want idForSomething output in my result, but I want to use it as a condition for my inner query.

SELECT
    idForSomething,
    SUM(allSomething) AS [@sum],
    (SELECT
         innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM someTable s2
     WHERE s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething
FOR XML PATH('listofsomethings')

Added XML Body:

<listofsomethings @sum="10">
    <innerlist @inner="..." @inner2="..." />
    <innerlist @inner="..." @inner2="..." />
    <innerlist @inner="..." @inner2="..." />
</listofsomethings>

I will look around online again, but I am asking for the syntax that tells SQL Server NOT to include the "idForSomething" column in the final output. I thought it was something like NOOUTPUT, but I can't remember, and it does not work.
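One thing worth noting (a sketch of an approach, not necessarily the remembered article): a column can appear in GROUP BY, WHERE and the correlated subquery without appearing in the SELECT list at all, which keeps it out of the XML output entirely.

```sql
SELECT
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething  AS [@inner],
            innerSomething2 AS [@inner2]
     FROM   someTable s2
     WHERE  s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething
FOR XML PATH('listofsomethings')
```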

DB_ID context from farther up call stack

Posted: 26 May 2013 07:58 AM PDT

In SQL Server, is it possible to get the DB_ID from the context from farther up the call stack?

My goal is to create some handy (and admittedly hacky) utility functions in a dev sandbox database that make it easy and concise to get the fully qualified names of objects given their short or fragmented names, and additionally to delete objects using the same short name. These utility functions would be in a single utility database but called from other databases on the same server.

From what I can see from testing:

  • ORIGINAL_DB_NAME() as intended returns whatever was in the connection string, not the current context (set by USE [dbname]).
  • When called in a function, DB_NAME() returns the name of the database where that function is defined. Another way of saying this is that the context inside a function or stored procedure is that of the database in which it is defined.

I know the engine keeps track of each of the database contexts up and down the call stack (see below for proof). So is there any way to access this information?

I want to be able to find and operate on objects in the context of the caller's database, even though the executing code is not in the same database. For example:

use SomeDB
EXEC util.dbo.frobulate_table 'my_table'

I know I can just do

EXEC util.dbo.frobulate_table 'SomeDB.dbo.my_table'  

But I'm just really curious if it is possible to query the call stack in this way.

Update / note

I did read and download the code from Gabriel McAdams' blog. This provides a record of the calling procedure ID up and down the stack but still assumes everything is in the same database.

Proof that SQL Server remembers the DB context up and down the call stack

Example: On a dev server with databases TestDB1 and TestDB2

use TestDB1
GO
CREATE FUNCTION dbo.ECHO_DB_NAME() RETURNS nvarchar(128) BEGIN RETURN DB_NAME() END
GO

use TestDB2
GO
CREATE PROCEDURE dbo.ECHO_STACK AS
BEGIN
    DECLARE @name nvarchar(128)
    SET @name = DB_NAME()
    PRINT 'Before, DB_NAME inside dbo.ECHO_STACK : ' + @name
    SET @name = TestDB1.dbo.ECHO_DB_NAME()
    PRINT 'TestDB1.dbo.ECHO_DB_NAME returned     : ' + @name
    SET @name = DB_NAME()
    PRINT 'After, DB_NAME inside dbo.ECHO_STACK  : ' + @name
END
GO

use master
SELECT DB_NAME()  -- Returns 'master'
EXEC TestDB2.dbo.ECHO_STACK

The ECHO_STACK proc prints:

Before, DB_NAME inside dbo.ECHO_STACK : TestDB2
TestDB1.dbo.ECHO_DB_NAME returned     : TestDB1
After, DB_NAME inside dbo.ECHO_STACK  : TestDB2

SQL Server Concise List Of Trace Flags

Posted: 26 May 2013 05:38 AM PDT

I did my five minutes of research and have not come up with a concise list; really, the only list I found at all was Trace Flags (2012), which is a bit lacking.

Looking at this query, which uses trace flag 8691, I was wondering if such a list exists.

with cte(guid,other) as (
    select newid(),1 union all
    select newid(),2 union all
    select newid(),3)
select a.guid, a.other, b.guid guidb, b.other otherb
from cte a
cross join cte b
order by a.other, b.other
OPTION (QUERYTRACEON 8691);
