Saturday, March 9, 2013

[how to] Finding stored procedures with missing indexes

Finding stored procedures with missing indexes

Posted: 09 Mar 2013 08:13 PM PST

How can I scan for stored procedures that need indexes to be created?

Does anyone have any ideas? Basically, we have stored procedures that run very slowly, and I'm wondering if there is some kind of automated script or app that can scan the workload and indicate whether an index needs to be created.
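SQL Server records missing-index suggestions in its dynamic management views while queries run, which is usually the quickest automated scan. A minimal sketch (the DMVs are standard; the ordering heuristic is an arbitrary choice):

-- Missing-index suggestions accumulated since the last restart (SQL Server 2005+).
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;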

Simple scenario needs to go from Unnormalized Data to 3NF - Review Please

Posted: 09 Mar 2013 07:26 PM PST

Here is the simple scenario. It is an animal charity, and at the moment, to track their establishment's costs, they have a spreadsheet like so:

(screenshot of the charity's cost-tracking spreadsheet)

1NF:

Am I right in saying that the above is already in 1NF because according to the definition of 1NF:

First Normal Form: A relation in which the intersection of each row and column contains one and only one value.

Then I went on to figure out what I would make my primary key, and went for the following:

IncuredCosts (ID, Name, BoughtFrom, DateBought, Cost)

2NF:

Am I also right in saying that the above is already in 2NF, because again according to the definition:

Second Normal Form: A relation that is in First Normal Form and every non-primary-key attribute is fully functionally dependent on the primary key

3NF:

Let me start by the definition here:

Third Normal Form: A relation that is in First and Second Normal Form and in which no non-primary-key attribute is transitively dependent on the primary key.

Here I am stuck: I do not know how to get it into 3NF. As it currently stands, it is possible that the Name attribute would duplicate (say they might order some more dog food), and of course, as you can see, the BoughtFrom attribute is already repeating.
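One possible 3NF decomposition pulls the repeating supplier out into its own relation (a sketch; the table and column names below are assumptions, not part of the original scenario):

-- Suppliers get their own relation so their names are stored once.
CREATE TABLE Supplier (
    SupplierID   INT PRIMARY KEY,
    SupplierName VARCHAR(100) NOT NULL
);

-- Each cost row references the supplier instead of repeating BoughtFrom.
CREATE TABLE IncurredCost (
    ID         INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL,
    SupplierID INT NOT NULL REFERENCES Supplier (SupplierID),
    DateBought DATE NOT NULL,
    Cost       DECIMAL(10, 2) NOT NULL
);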

I would greatly appreciate some help.

Thank You.

Saving settings for multiple accounts in SQL Server

Posted: 09 Mar 2013 07:25 PM PST

I'm using SQL 2012 to store the account settings for multiple accounts (each user account will have a set of settings). These settings could have any number of fields (the set is not finalized and could grow), but each user account will have the same set of settings (with different values, of course).

Is it better (faster with a smaller footprint and more efficient) to:

  1. Give each setting its own column and each row to a user? Keep in mind that there could be hundreds of settings variables (i.e. userid, setting1, setting2, setting3, ...etc.).

  2. Relate each individual setting to a user, making it a name-value-pair table (i.e. userid, settingName, settingValue)?
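For option 2, a sketch of the name/value shape (names assumed), which keeps one narrow table however many settings appear later:

CREATE TABLE dbo.UserSettings (
    UserID       INT           NOT NULL,
    SettingName  NVARCHAR(64)  NOT NULL,
    SettingValue NVARCHAR(256) NULL,
    CONSTRAINT PK_UserSettings PRIMARY KEY (UserID, SettingName)
);

-- Fetching one account's settings is a single clustered seek.
SELECT SettingName, SettingValue
FROM dbo.UserSettings
WHERE UserID = 42;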

table design to track significant events associated with an item

Posted: 09 Mar 2013 08:59 PM PST

An Orders table I'm dealing with at present in design phase has about 10 or more columns.

Each OrderID in the Orders table will have certain significant events or milestones. The number of Orders could go into the millions. At the moment, I'm aware of about 20 important events to track for each Order, but that number may increase or decrease.

Of these 20 or so events, only a handful will always take place, while the rest are just potential events. Also, many - but not all - of these events have at least two points which need storage:

  1. the Date the event is supposed to happen (if it is no longer just a "potential" event)
  2. the Date the event did happen

The events data will be updated in the database from external sources in the form of .xls or .csv files. For all events, it is only possible for the given event to happen once.

What would be the best design for storing this milestone event data?
Would you try to use a somewhat normalized design, where there is an Events table designed similar to this...

CREATE TABLE dbo.Events (
      EventID           INT
    , OrderID           INT
    , EventExpectedDate DATETIME2(7)
    , EventActualDate   DATETIME2(7)
    , EventTypeID       INT
    , EventSkip         BIT
    ...
)

Or would you try a less normalized approach and flatten all the possible events out into one very wide events table that mimics what the .xls/.csv update files will look like?

CREATE TABLE dbo.Events (
      EventID           INT
    , OrderID           INT
    , Event_1_ExpDate   DATETIME2(7)  /* No, the actual names would not include 1, 2, 3 */
    , Event_1_ActDate   DATETIME2(7)  /* The actual names would name the type of Event */
    , Event_2_ExpDate   DATETIME2(7)
    , Event_2_ActDate   DATETIME2(7)
    ...
    , Event_20_ExpDate  DATETIME2(7)
    , Event_20_ActDate  DATETIME2(7)
)

Or is there another pattern you would use?

I would like to query a range of criteria on multiple columns in MySQL

Posted: 09 Mar 2013 12:02 PM PST

I have 3 columns in a mysql table. I'm using Innodb engine. I want to be able to search for some values on those columns and also anything close to those values. For example :

We have 3 columns, "Fast", "Smart", and "Sharp", and they are of type boolean/int. I have 1,000 records in the table; as an example, I list 5 here.

Fast | Smart | Sharp
--------------------
  0  |   1   |   1
  1  |   1   |   1
  0  |   0   |   1
  1  |   1   |   1
  1  |   0   |   0

And let's say we want to search for people who are smart and sharp, as 0 1 1; but if there is no record with the value 0 1 1, we want to get the closest to it, which is 1 1 1 or 1 0 1, etc. Right now when I search, I get the exact value; if the value doesn't exist, I get nothing back from the DB. So how can I achieve this, and if I get a bunch of results, how can I sort them from the closest to my query to the farthest?
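One way to get "closest first" with 0/1 flags is to sort by the Hamming distance from the target values; a sketch (the table name is assumed, and the target 0/1/1 is hard-coded):

SELECT fast, smart, sharp,
       ABS(fast - 0) + ABS(smart - 1) + ABS(sharp - 1) AS distance
FROM people
ORDER BY distance, fast, smart, sharp
LIMIT 10;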

mysql_connect() system error :111

Posted: 09 Mar 2013 10:01 AM PST

I have 2 VPS.

I would like to connect from the first to the second's MySQL DB like this:

$db = mysql_connect("94.xxx.xxx.xxx:222", "user", "pass");  

I got this error:

Warning: mysql_connect(): Lost connection to MySQL server at 'reading initial communication packet', system error: 111 in /var/www/pcsart/test.php on line 4

The first server works on port 222 and does not have MySQL on it. On the second, MySQL is running on port 3306.

First and second server's firewall got these rules :

iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:mysql
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:mysql
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:222
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:222
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:mysql

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:mysql
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:222

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:mysql
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:222
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:mysql

What am I doing wrong?
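Given the connect string points at port 222 while MySQL listens on 3306, it is worth confirming both the port and the remote grant on the second server (a sketch; the user name is a placeholder):

SHOW VARIABLES LIKE 'port';             -- should report 3306
SHOW VARIABLES LIKE 'skip_networking';  -- must be OFF for TCP connections
SELECT user, host FROM mysql.user WHERE user = 'user';  -- host must match the first VPS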

Fast paginated result, for various filter clauses

Posted: 09 Mar 2013 07:39 AM PST

I have been working on obtaining a paginated result from a table in my database (11g). While I have a query that does work (i.e., the result is correct), it does not perform as well as I'd like, and I am trying to improve its efficiency (est. 60 calls per second on that query alone).

So first of all, I read What is an Effective Way to Count the Number of Pages in Oracle?, and the article it points to, but unfortunately it does not discuss the execution plans of the queries presented at all (I'll get to see them come Monday).

Here is the query I had come up with (the table is partitioned on part_code which is a date range):

select <all-columns> from (
    select rownum as rnum, <all-columns> from (
        select /*+index (my_table index)*/ <all-columns> from my_table
        where part_code in (?, ?, ?)
          and date between ? and ?
          and type in (?, ?, ?)
          and func_col1 = ?
          /*potentially: and func_col2 = ? and func_col3 = ? ... */
        order by date, type, id
    )
) where rnum between M and N; /* N-M ~= 30 */

Note: most of the queries will be carried out with a single func_xxx filter, and I think M will be small... but have no guarantee and in theory M can be up to 9999.

Note: ~72 partitions in total, but only ~39 active at most, ~300,000 different values of func_col1 (but with some having ~50,000 rows and some having 1 row), ~10 values of type, id is unique (generated through a sequence).

It does work, but there is a nasty surprise in the execution plan: the 2nd query (with rownum as rnum) is completely executed, fetching up to ~50,000 rows from the DB, before the pagination kicks in. Fetching ~50,000 rows from the DB to return some ~30 of them seems particularly inefficient.

In the execution-plan this shows up as:

View
 \_ Filter (rnum)
    \_ View <= here comes the trouble
       \_ Sort
          \_ ...

I can create an index over the table if necessary, and I can transform the existing partitioned index (part_code, func_col1, date, type, id). The index has exactly the order required by my order by clause; but it does not seem to be sufficient (and removing the order by clause does not get me any closer anyway).

Is there a way to prevent the "materialization" of the view and have Oracle build it on the fly as the outer query needs more data (ie, moving to lazy evaluation of the inner queries) ?
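The classic Oracle workaround is to push a ROWNUM stop condition into the middle query so the optimizer can use COUNT STOPKEY and a top-N sort instead of materializing everything; a sketch in the same placeholder style as the query above:

select *
from (
    select rownum as rnum, t.*
    from (
        select /*+index (my_table index)*/ *
        from my_table
        where part_code in (?, ?, ?)
          and date between ? and ?
          and type in (?, ?, ?)
          and func_col1 = ?
        order by date, type, id
    ) t
    where rownum <= :N   /* stopkey: stop fetching at the page end */
)
where rnum > :M;         /* trim the rows before the page start */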

Improving performance of SQL Server after migration to cluster servers [closed]

Posted: 09 Mar 2013 05:00 AM PST

Since migrating our database (SQL Server 2008) to Cluster Server, it has become very slow, and we cannot fetch data and reports from it.

What are the most important things we should be checking after such a change to get the performance back on track?

Type conversion in expression may affect "CardinalityEstimate" in query plan choice?

Posted: 09 Mar 2013 08:36 PM PST

I maintain an archive database that stores historical data in partitioned views. The partitioning column is a datetime. Each table under the view stores one month of data.

We constrain the events in each table with a check constraint on the datetime column. This allows the optimizer to limit the tables that are searched for queries that filter on the event datetime column.

The names of the check constraints were generated by SQL Server, so it's hard to know what they do by looking at their name.

I want the constraint names to have the form 'CK_TableName_Partition'.

I can generate a rename script using this query and copying data from from sql_text column. The WHERE clause matches check constraints whose names look like they were generated by SQL Server:

SELECT
  checks.name AS check_name,
  tabs.name AS table_name,
  skemas.name AS schema_name,
  cols.name AS column_name,
  N'
EXECUTE sys.sp_rename
  @objname = N''' + skemas.name + N'.' + checks.name + N''',
  @newname = N''CK_' + tabs.name + N'_Partition'',
  @objtype = ''OBJECT'';' AS sql_text
FROM sys.check_constraints AS checks
INNER JOIN sys.tables AS tabs ON
  tabs.object_id = checks.parent_object_id
INNER JOIN sys.schemas AS skemas ON
  skemas.schema_id = tabs.schema_id
INNER JOIN sys.columns AS cols ON
  tabs.object_id = cols.object_id AND
  cols.column_id = checks.parent_column_id
WHERE checks.name LIKE (
  N'CK__' + SUBSTRING(tabs.name, 1, 9) +
  N'__' + SUBSTRING(cols.name, 1, 5) +
  N'__' + REPLACE(N'xxxxxxxx', N'x', N'[0-9A-F]') COLLATE Latin1_General_BIN2
)
ORDER BY table_name;

The output looks like this:

check_name                      table_name             schema_name  column_name    sql_text
CK__tbAcquisi__Acqui__5C4299A5  tbAcquisitions_201301  Archive      AcquisitionDT  EXECUTE sys.sp_rename @objname = N'Archive.CK__tbAcquisi__Acqui__5C4299A5', @newname = N'CK_tbAcquisitions_201301_Partition', @objtype = 'OBJECT';
CK__tbAcquisi__Acqui__76026BA8  tbAcquisitions_201302  Archive      AcquisitionDT  EXECUTE sys.sp_rename @objname = N'Archive.CK__tbAcquisi__Acqui__76026BA8', @newname = N'CK_tbAcquisitions_201302_Partition', @objtype = 'OBJECT';
CK__tbAcquisi__Acqui__7D6E8346  tbAcquisitions_201303  Archive      AcquisitionDT  EXECUTE sys.sp_rename @objname = N'Archive.CK__tbAcquisi__Acqui__7D6E8346', @newname = N'CK_tbAcquisitions_201303_Partition', @objtype = 'OBJECT';
...
CK__tbRequest__Reque__60132A89  tbRequests_201301      Archive      RequestDT      EXECUTE sys.sp_rename @objname = N'Archive.CK__tbRequest__Reque__60132A89', @newname = N'CK_tbRequests_201301_Partition', @objtype = 'OBJECT';
CK__tbRequest__Reque__1392CE8F  tbRequests_201302      Archive      RequestDT      EXECUTE sys.sp_rename @objname = N'Archive.CK__tbRequest__Reque__1392CE8F', @newname = N'CK_tbRequests_201302_Partition', @objtype = 'OBJECT';
CK__tbRequest__Reque__1AFEE62D  tbRequests_201303      Archive      RequestDT      EXECUTE sys.sp_rename @objname = N'Archive.CK__tbRequest__Reque__1AFEE62D', @newname = N'CK_tbRequests_201303_Partition', @objtype = 'OBJECT';

The result of the query seems to be correct and the server executes it quickly.

But the root node of the execution plan has a warning:

Type conversion in expression (CONVERT_IMPLICIT(nvarchar(128),[o].[name],0)) may affect "CardinalityEstimate" in query plan choice

What does that mean in this context? Is such a complex filter confusing the optimizer? Is it something I should be worried about?

RSExecRole missing

Posted: 09 Mar 2013 05:08 AM PST

I have multiple servers and 4 of them have reporting services installed with the ReportServer & ReportServerTempDB databases.

A few days ago I was setting up security for a new login, and I found out that the RSExecRole role is missing on my dev server (in the ReportServer database). It exists in the ReportServerTempDB, master & msdb databases.

I found a way on MSDN to create it in the master & msdb databases, but it didn't help me create it in ReportServer with all the securables & properties similar to my other environments.

Has anyone encountered this problem before? Can anyone help me with a script and an explanation about this role?
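A starting point (a sketch, not the full official procedure: the role itself is an ordinary database role, and the second query is meant to be run on a healthy server to script out the grants to copy across):

USE ReportServer;
GO
CREATE ROLE RSExecRole AUTHORIZATION dbo;
GO
-- On a working ReportServer catalog, list what RSExecRole is granted there:
SELECT state_desc, permission_name, OBJECT_NAME(major_id) AS securable
FROM sys.database_permissions
WHERE grantee_principal_id = DATABASE_PRINCIPAL_ID('RSExecRole');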

Implementation of stack in MySQL

Posted: 08 Mar 2013 11:15 PM PST

I have an abstract idea of implementing a stack in MySQL. I tried a lot of Googling but couldn't reach any logical conclusion.

The stack must be a collection of tables, with the stack top pointing at a particular table. We may have various such stacks in our DB. The stack length may increase if, for example, the field from the table is full. Likewise, the stack length may decrease in the opposite manner!

Could you please give me an implementation idea? Is it really possible in the first place? I hope I'm clear with my question.
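For what it's worth, a stack is usually simpler as rows in a single table than as a collection of tables: push is an INSERT, and peek/pop work on the highest position. A sketch (all names assumed):

CREATE TABLE stack_entries (
    stack_id INT NOT NULL,   -- which stack the row belongs to
    position INT NOT NULL,   -- grows with each push
    payload  VARCHAR(255),
    PRIMARY KEY (stack_id, position)
);

-- Push onto stack 1:
INSERT INTO stack_entries (stack_id, position, payload)
SELECT 1, COALESCE(MAX(position), 0) + 1, 'new value'
FROM stack_entries WHERE stack_id = 1;

-- Peek at the top of stack 1:
SELECT payload FROM stack_entries
WHERE stack_id = 1 ORDER BY position DESC LIMIT 1;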

Want to Get Brands In Wordpress Database Structure Saved as Custom Post types

Posted: 09 Mar 2013 12:34 PM PST

Hello out there. I am having difficulty fetching the brands per category in the sidebar of my website.

Here is the description ..

I am using the feeds and feed uploader to upload feeds and create brands and product categories in WordPress.

The problem is that there is no link between product_category and product_brand, and I want to show the specific brands of a specific category in the sidebar, not the whole brands list, which gets very long.

So I tried out the heavy queries below to fetch the brands per category, but the INNER JOINs made the database too slow, and the website keeps loading without showing brands. The queries return correct results; I just want to know how to speed them up so customers don't fade away.

Here are the queries I am using

$term = get_term_by('slug', get_query_var('term'), 'product_category');

$q5 = "SELECT DISTINCT p1.ID
       FROM $wpdb->posts p1
       INNER JOIN $wpdb->term_relationships tr1 ON (p1.ID = tr1.object_id)
       INNER JOIN $wpdb->term_taxonomy tt1 ON (tr1.term_taxonomy_id = tt1.term_taxonomy_id)
       WHERE tt1.term_id = '$term->term_id'";

$q2 = "SELECT tt2.term_id
       FROM $wpdb->posts p2
       INNER JOIN $wpdb->term_relationships tr2 ON (p2.ID = tr2.object_id)
       INNER JOIN $wpdb->term_taxonomy tt2 ON (tr2.term_taxonomy_id = tt2.term_taxonomy_id)
       WHERE p2.ID IN ($q5)";

$q3 = "SELECT DISTINCT tt3.*
       FROM $wpdb->posts p3
       INNER JOIN $wpdb->term_relationships tr3 ON (p3.ID = tr3.object_id)
       INNER JOIN $wpdb->term_taxonomy tt3 ON (tr3.term_taxonomy_id = tt3.term_taxonomy_id)
       INNER JOIN $wpdb->terms t3 ON t3.term_id = tt3.term_id
       WHERE 1=1
         AND tt3.term_id IN ($q2)
         AND tt3.taxonomy = 'product_brand'
       ORDER BY t3.name ASC";

$brands = $wpdb->get_results($q3);

The first two queries run fine, but the last one makes the database query too slow.

So, any help please?
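One thing worth trying is collapsing the three nested queries into a single statement, so MySQL plans it as one join rather than an IN over an IN; a sketch in plain SQL (the wp_ prefixes stand in for $wpdb, and :term_id is the category's term id):

SELECT DISTINCT tt_brand.*, t.name
FROM wp_term_taxonomy AS tt_cat
JOIN wp_term_relationships AS tr_cat
  ON tr_cat.term_taxonomy_id = tt_cat.term_taxonomy_id
JOIN wp_term_relationships AS tr_brand
  ON tr_brand.object_id = tr_cat.object_id
JOIN wp_term_taxonomy AS tt_brand
  ON tt_brand.term_taxonomy_id = tr_brand.term_taxonomy_id
 AND tt_brand.taxonomy = 'product_brand'
JOIN wp_terms AS t
  ON t.term_id = tt_brand.term_id
WHERE tt_cat.term_id = :term_id
  AND tt_cat.taxonomy = 'product_category'
ORDER BY t.name ASC;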

MySQL incredibly slow on WAMP and XAMPP

Posted: 09 Mar 2013 08:34 PM PST

I've installed WAMP on my new Windows 8 machine, and for some reason, when I use phpMyAdmin to import an SQL file (it's only about 5 MB), it takes several minutes before timing out. And then when I check, I can see it has only imported a fraction of the tables.

I've tried other SQL files and it's the same story. These same SQL files import perfectly fine (and in a couple of seconds) on my Windows 7 machine, also running WAMP.

I've since uninstalled WAMP and tried XAMPP, and the problem still exists. Even just browsing the databases with PHPMyAdmin takes a long time between page loads.

I am guessing it's a problem with MySQL. Any suggestions would be helpful.

PostgreSQL replication for archiving

Posted: 09 Mar 2013 04:09 AM PST

I've looked around a bit and haven't found a very clear strategy or method for what I'm trying to do, which is surprising because I would think it would be a fairly common practice.

I currently use Bucardo to replicate in a Master Slave setup. This works fine except I want the Slave to retain all records ever written to it. The master is a smaller system which will need to have its tables truncated periodically. I don't want these truncates to be carried over to the Slave. I would think this would be a fairly common practice but have been stumped finding a solution that will work.

Could anyone point me in the right direction? The right direction doesn't necessarily need to involve using Bucardo either.

Thanks

column precision exceeded. ORA-01438 is so unmeaningful

Posted: 09 Mar 2013 05:34 PM PST

I am not sure if this is a question for the Stack Exchange DBA group; please let me know if it's not.

Let's suppose you have a Java batched prepared statement writing data to the database. If one of the columns has a value larger than its specified precision, it throws an ORA-01438 exception. Is there any way to find out which column that is?
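The ORA-01438 text does not name the column, so the usual fallback is to compare the batch's values against the declared precision in the data dictionary (a sketch; the table name is a placeholder):

SELECT column_name, data_precision, data_scale
FROM all_tab_columns
WHERE table_name = 'MY_TABLE'
  AND data_type = 'NUMBER'
ORDER BY column_id;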

Applying user-defined fields to arbitrary entities in a SQL Server database

Posted: 09 Mar 2013 07:34 PM PST

Currently we have an old (rather crude) system that has user defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:

USER_DEFINED_FIELDS:

UDF_ID         int
ENTITY_TYPE    int
UDF_NAME       nvarchar(64)
UDF_DATA_TYPE  int
UDF_FORMAT     nvarchar(16)
UDF_MASK       nvarchar(24)
UDF_FLAGS      int

UDF_VALUES_NUMBER:

UDF_ID     int
ENTITY_ID  int
VALUE      int
MODIFIED   datetime

UDF_VALUES_TEXT:

UDF_ID     int
ENTITY_ID  int
VALUE      nvarchar(255)
MODIFIED   datetime

etc...

This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.

Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;)

Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2.

So far, I've come up with the following options:

  • Individual UDF table for each entity type, with identical structures.
    • Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
    • Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
  • Binary blob UDF column in every entity's table.
    • Benefits: Really fast: no fetch overhead, no extra JOINs.
    • Downsides: Can't search by UDF, extra proprietary code required.
  • XML column in every entity's table.
    • Benefits: Fast like binary blobs, seems to be indexable.
    • Downsides: No idea how they work - looks complicated!

So, do any of these ideas have merit? Is there a better alternative?
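On the XML option: an XML column is queryable and indexable. A sketch of the shape (names assumed; also worth a look on 2008 R2 are sparse columns with a column set, which behave like a very wide table without the storage cost):

CREATE TABLE dbo.Widgets (
    WidgetID INT NOT NULL PRIMARY KEY,
    UDFs     XML NULL
);

-- A primary XML index speeds up .exist()/.value() searches.
CREATE PRIMARY XML INDEX PXML_Widgets_UDFs ON dbo.Widgets (UDFs);

-- Search entities by a UDF value stored in the XML.
SELECT WidgetID
FROM dbo.Widgets
WHERE UDFs.exist('/udf[@name="Colour" and @value="Red"]') = 1;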

Access denied Error in MySQL and Drupal

Posted: 09 Mar 2013 12:34 AM PST

We are trying to deploy a Drupal application on our production server.

We see this strange error after deploying: Lost connection to MySQL server at '' Access denied root@ipaddress password (yes).

We made all the necessary corrections in settings.php and also gave GRANTs to the web server to access the database. The web server and the database server are physically two different machines.

We are able to connect to the database server from our web server through the command line.

There is no firewall enabled on either server.

Any update we make to the database means the site goes down and displays an ACCESS DENIED error.

Can any one guide me to resolve this issue?

Thanks
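For reference, a grant of the shape Drupal needs, issued on the database server (a sketch; the database name, user, web-server host, and password are placeholders):

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
      CREATE TEMPORARY TABLES, LOCK TABLES
ON drupal.* TO 'drupaluser'@'webserver-host' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;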

ON DELETE cascade alternate

Posted: 09 Mar 2013 01:34 PM PST

I am trying to create a database for 2 Tables like:

Table1 : tb1-id(pk) | tb2-id | other

Table2 : tb2-id(pk) | other

FOREIGN KEY Table1(tb2-id) REFERENCES Table2(tb2-id) ON DELETE CASCADE

Table pseudo-code:

CREATE TABLE Table1 (
    tbl1-id  INT PRIMARY KEY
  , tbl2-id  INT FOREIGN KEY REFERENCES Table2(tbl2-id) ON DELETE CASCADE
  ...
)

Table2:

CREATE TABLE Table2 (
    tbl2-id  INT PRIMARY KEY
  ...
)

There can be 2 records in Table1 that point to a single record in Table2. If we DELETE 1 of the 2 records pointing to Table2 from Table1, MySQL would also DELETE the referenced record from Table2. I want the referenced record to be deleted only when both of the referencing records in Table1 are deleted.

Is there any other MySQL method/constraint that can fulfill the above requirement?
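There is no built-in constraint for "delete the parent when the last child goes", so it has to be scripted. One option is an AFTER DELETE trigger on the referencing table, with the cascade removed (a sketch; the hyphens in the column names are replaced with underscores to make it valid SQL):

DELIMITER //
CREATE TRIGGER trg_table1_after_delete
AFTER DELETE ON Table1
FOR EACH ROW
BEGIN
  -- Remove the parent row only if no other child still references it.
  IF NOT EXISTS (SELECT 1 FROM Table1 WHERE tb2_id = OLD.tb2_id) THEN
    DELETE FROM Table2 WHERE tb2_id = OLD.tb2_id;
  END IF;
END//
DELIMITER ;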

Parallel Statistics Update

Posted: 09 Mar 2013 05:55 PM PST

In SQL Server 2008 or later, is UPDATE STATISTICS WITH FULLSCAN a single-threaded operation, or can it use parallelism? How about update statistics with default sampling - can it use parallelism? I don't see an option for specifying MAXDOP with update stats.

Stored Procedures Best Practices: Fenced or Unfenced?

Posted: 09 Mar 2013 08:12 AM PST

I believe I understand the reasons behind fenced and unfenced stored procedures.

Fenced procedures run "outside" of the database engine (in our case DB2), so as to prevent possible corruption of the database engine should there be issues with things like pointers.

Unfenced procedures run "inside" the database engine, which means that performance is better.

From what I have also researched, SQL PL is always basically unfenced, because it is SQL and therefore cannot access memory the way programming languages can.

C/C++ and Java procedures can run fenced or unfenced. But since they can potentially access memory, there should be a consideration for running them fenced, unless there is certainty about the quality of the code (that it will not crash) and it needs the performance.

First of all, am I correct in my understanding of the above?

Next, is it generally a best practice to start out with all stored procedures (even those defined as SQL PL) as fenced first?

Any other best practices for stored procedures, especially as related to fencing and/or security?

EDIT: Further research has shown that SQL PL procedures cannot run fenced. Since they do not contain any code that could harm the database engine such as pointers or file I/O, DB2 knows they are safe and runs them inside the engine (ie, unfenced). That being said, I am still looking for best practices regarding all other stored procedures.
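For reference, fencing is declared on the routine itself when it is registered; a sketch of an external C procedure created fenced (the library and routine names are placeholders):

CREATE PROCEDURE update_inventory (IN item_id INTEGER)
  LANGUAGE C
  PARAMETER STYLE SQL
  FENCED                    -- run in the separate fenced-mode process
  EXTERNAL NAME 'invlib!update_inventory'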

How to prevent high memory, CPU and time consumption by MySQL restore?

Posted: 09 Mar 2013 04:34 AM PST

I have a local test machine with 2 GB RAM and a dual core processor.

I imported a dump on that using

mysql -uuser -psecret < script.sql  

script.sql is a 700 MB file which contains a single database with more than 100 tables.

Restore took approximately 25 minutes.

Question: Why did it take so long? I have already optimized my server settings, and the dump contains extended inserts and disables keys. Is 700 MB too large for this kind of machine? mysqld was using 100% of one CPU when I checked using htop, and 22% of the memory.

Now what actually I want is to convert all my tables of that database to InnoDB. So for that I executed a simple shell command:

mysql -uuser -psecret -Bse "select Concat('Alter Table ' , TABLE_SCHEMA ,'.',TABLE_NAME , ' Engine= \'InnoDB\' ;') from information_schema.tables where ENGINE='myisam' and TABLE_SCHEMA='db_name';" > Innodb.sql

so Innodb.sql contains the script for converting tables to InnoDB.

Now when I run Innodb.sql, one of my tables takes more than 20 minutes for conversion. It contains only 1,378,397 records.

In total it takes more than 30 minutes. In the meantime, memory usage by the mysqld daemon was 73%. The CPU usage was OK at this time.

How can I minimize the time consumption? Should I go for changing MySQL server settings or anything else?

If anybody wants my my.cnf settings, I will share them.
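For the import itself, session-level switches avoid per-row constraint work (the InnoDB buffer pool and log settings in my.cnf then dominate conversion speed). A sketch of the session part:

-- Before running script.sql / Innodb.sql in this session:
SET unique_checks = 0;
SET foreign_key_checks = 0;

-- ... run the import / ALTER TABLE script here ...

SET unique_checks = 1;
SET foreign_key_checks = 1;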

MySQL Performance Monitoring

Posted: 09 Mar 2013 02:19 PM PST

I already have system availability monitoring for all my MySQL instances in Nagios. I'm looking to get some trending monitoring going for activity. It'd be nice to see how all the information_schema stats trend over time, to help identify sources of problems etc.

I've seen Cacti exists and seems pretty popular.

What have your experiences with Cacti been (pros/cons)?

Are there any other common solutions I should investigate?

[SQL Server] import data with identity columns...

import data with identity columns...

Posted: 09 Mar 2013 06:02 AM PST

Really going crazy here! I have table1 and table2 [for example's purposes]. They are of exactly the same design:

id [autoincrement, primary key, int]
name [varchar(10)]
creatorid [int] -- code of the office which inserted the row

Both can contain the same id value. I need to bring table1 data into table2 while keeping the data. The DB was designed for use on an NT network, but now there are offices which are not networked, and their data needs to come to the central office through export. How do we do that? Every row has the id of the office which created it. The problem is that the id column is referenced in other tables. Any insight please?
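If the office-generated ids must survive the merge, IDENTITY_INSERT carries them across, and keying the duplicate check on (id, creatorid) sidesteps collisions between offices (a sketch; assumes both tables as described):

[code="sql"]SET IDENTITY_INSERT dbo.table2 ON;

INSERT INTO dbo.table2 (id, name, creatorid)
SELECT s.id, s.name, s.creatorid
FROM dbo.table1 AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.table2 AS t
                  WHERE t.id = s.id AND t.creatorid = s.creatorid);

SET IDENTITY_INSERT dbo.table2 OFF;[/code]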

show single field across several fields in record set?

Posted: 09 Mar 2013 02:01 AM PST

I have a table where I query one of the fields. My question is: is there a way I can produce the results across 3 fields? I'd like to use a datagrid on a webpage with 3 columns, and fill the datagrid with the record set starting at the first column with the first result, then the second column with the second result, then the third column with the third result; then the first column of the second row gets the fourth result, and so on. I'm assuming I need to populate a temporary table which has 3 columns with the data from my table, but I'm not sure how.

My table would be populated like:

field1
item1
item2
item3
item4
item5
item6
item7
... and so on

The desired record set, as shown in my datagrid, would look like:

Field1  Field2  Field3
item1   item2   item3
item4   item5   item6
item7   ...     ...
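A sketch of folding the rows three to a line with ROW_NUMBER and modular arithmetic (the table name is assumed):

[code="sql"]WITH numbered AS (
    SELECT field1,
           ROW_NUMBER() OVER (ORDER BY field1) - 1 AS rn
    FROM dbo.MyTable
)
SELECT MAX(CASE WHEN rn % 3 = 0 THEN field1 END) AS Field1,
       MAX(CASE WHEN rn % 3 = 1 THEN field1 END) AS Field2,
       MAX(CASE WHEN rn % 3 = 2 THEN field1 END) AS Field3
FROM numbered
GROUP BY rn / 3
ORDER BY rn / 3;[/code]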

What's wrong with this SQL?

Posted: 09 Mar 2013 02:54 AM PST

VBA - I'm trying to insert a record into MS Access from Excel... It keeps telling me that something is wrong with the INSERT INTO statement. I tried changing the 'dateReg' (date registration) variable to # delimiters, and also tried with " strings; same message. dateReg is a date field in the dB, the rest of the fields are text. I have no problem retrieving data (Select). Any advice will be greatly appreciated... thx

Sub InsertRecordDB()
    Dim con As Connection
    Dim rs As Recordset

    Set con = New Connection
    con.connectionstring = _
        "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Persist Security Info=False;Data Source=" & _
        "C:\ExcelApplications\Databases\Development.mdb"
    con.Open

    ' Instantiate the Recordset object, then set its properties.
    Set rs = New Recordset
    With rs
        .Source = "INSERT INTO tblUsers values('" & txtUN.Text & "', '" & txtPW.Text & "', '" & cboUsers.Text & "', '" & txtPicNum.Text & "', '" & dateReg & "', '" & cboFloors.Text & "', '" & cboGender.Text & "', '" & cboAdmin.Text & "')"

        ' also tried this, adding # delimiter to dateReg:
        .Source = "INSERT INTO tblUsers (Username, Password, Name_Lname, Image_Number, RegisteredDate, Floor_Num, Gender, Emp_Type) values('" & txtUN.Text & "', '" & txtPW.Text & "', '" & cboUsers.Text & "', '" & txtPicNum.Text & "', #" & dateReg & "#, '" & cboFloors.Text & "', '" & cboGender.Text & "', '" & cboAdmin.Text & "')"

        Set .ActiveConnection = con
        .CursorLocation = adUseClient
        .CursorType = adOpenStatic
        ' Open the recordset.
        .Open
    End With

    Dim RowCnt, FieldCnt As Integer
    RowCnt = 1
    ' Use field names as headers in the first row.
    ' For FieldCnt = 0 To rs.Fields.count - 1
    '     Cells(RowCnt, FieldCnt + 1).Value = rs.Fields(FieldCnt).name
    '     Rows(1).Font.Bold = True
    ' Next FieldCnt

    ' Fill rows with records, starting at row 2.
    RowCnt = 2
    ' While Not rs.EOF
    '     For FieldCnt = 0 To rs.Fields.count - 1
    '         Cells(RowCnt, FieldCnt + 1).Value = rs.Fields(FieldCnt).Value
    '     Next FieldCnt
    '     rs.MoveNext
    '     RowCnt = RowCnt + 1
    ' Wend

    con.Close
    Call Sheet1.UndoSplashScreen
    MsgBox "Record inserted successfully!"
End Sub
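One likely culprit (an assumption based on the column list): Password is a reserved word in the Jet/Access SQL dialect, so it must be bracketed, and date values do take # delimiters. The statement the code builds should come out looking like this (sample values):

[code="sql"]INSERT INTO tblUsers
    ([Username], [Password], [Name_Lname], [Image_Number],
     [RegisteredDate], [Floor_Num], [Gender], [Emp_Type])
VALUES ('jdoe', 'secret', 'John Doe', '42', #2013-03-09#, '3', 'M', 'User');[/code]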

Best way to store images

Posted: 08 Mar 2013 06:59 PM PST

Hi. I noticed that if we store an image in a binary type in SQL Server, it takes more space; for example, a 700 KB image takes about 900 KB of space. Given this, which way is better for storing images?
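For larger images, SQL Server 2008's FILESTREAM keeps the bytes in the NTFS file system while staying transactional, which avoids most of the in-row overhead; a minimal sketch (requires a FILESTREAM filegroup on the database; names assumed):

[code="sql"]CREATE TABLE dbo.Images (
    ImageID   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName  NVARCHAR(260) NOT NULL,
    ImageData VARBINARY(MAX) FILESTREAM NULL
);[/code]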

[MS SQL Server] Possible to use Policies to force ad-hoc backups to be copy_only?

Possible to use Policies to force ad-hoc backups to be copy_only?

Posted: 08 Mar 2013 05:14 AM PST

So at my new employer, the backup schedule is weekly fulls, daily differentials, and bi-hourly transaction logs.

We fairly often (a couple times a week or so) need to take backups of production DBs and restore them to QA. Having read [url=http://sqlinthewild.co.za/index.php/2011/03/08/full-backups-the-log-chain-and-the-copy_only-option/]this blog posting[/url] from another topic on here, I can see we should do copy_only so as to not break the differential sequence.

I took a brief look into Management Policies and, to be honest, I'm thinking the answer to this is going to be that we can't enforce this; we'll have to make sure to do it ourselves. But it doesn't hurt to ask... :-D

Thanks,
Jason A.
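For reference, the ad-hoc backup itself is only a flag away, and COPY_ONLY leaves the differential base untouched (a sketch; names assumed):

[code="sql"]BACKUP DATABASE ProdDB
TO DISK = N'\\backupshare\ProdDB_for_QA.bak'
WITH COPY_ONLY, STATS = 10;[/code]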

What is the Difference between 32bit replication and 64bit replication?

Posted: 08 Mar 2013 05:44 AM PST

Hi Friends, what is the difference between 32-bit replication and 64-bit replication? Thanks in advance. :-)

[SQL 2012] Always on Secondary database backup plan & Recovering.

Always on Secondary database backup plan & Recovering.

Posted: 08 Mar 2013 11:14 PM PST

Hi All,

Can anyone suggest one good backup plan for an Always On secondaries server?

Currently I am planning to do one full copy-only backup every 24 hours, and a log backup every 30 minutes.

I am looking for a solution for recovering the database using this backup strategy.
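That plan lines up with what secondaries allow: copy-only full backups plus regular log backups. A sketch of the two jobs (names assumed); recovery is then the usual chain - restore the full WITH NORECOVERY, then each log in sequence:

[code="sql"]-- Daily, on the secondary:
BACKUP DATABASE MyAgDb
TO DISK = N'H:\Backups\MyAgDb_full.bak'
WITH COPY_ONLY, INIT;

-- Every 30 minutes, on the secondary:
BACKUP LOG MyAgDb
TO DISK = N'H:\Backups\MyAgDb_log.trn';[/code]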

How to dynamically determine which partitions to process?

Posted: 08 Mar 2013 06:40 AM PST

Hello,

I have a Tabular model with a fact table partitioned on a date field. I have data in there from Jan 1 to Jan 10, partitioned daily, thus 10 partitions. I now want to process the cube because I have new data for the 11th coming in.

My questions are:
1) How do I dynamically ask SSAS to create a fresh partition for day 11?
2) How do I ask SSAS to process partition 11 ONLY?

Thanks

[T-SQL] In Need of some expert query help

In Need of some expert query help

Posted: 08 Mar 2013 10:52 AM PST

The below query I have been piecing together is really slow. I was wondering if any of you might be able to give me some ideas with regard to optimizing it. My goal is to create a view and then leverage the view in an SSIS project. I'm thinking there MUST be a better solution. Thanks

[code="sql"]SELECT 'MySource' AS SourceDB,
       'MySite' AS 'SiteName',
       Rtrim(apibh.idvend) AS VendorNumber,
       Rtrim(apibh.idinvc) AS VoucherNumber,
       Sum(apibh.amtgrosdst) AS VoucherTotalDomestic,
       Sum(apibh.amtgrosdst) * ( apibh.exchratehc ) AS VoucherTotalUSD,
       (SELECT DISTINCT ( Sum(p.amtpaym) )
        FROM aptcr r WITH(NOLOCK)
        INNER JOIN aptcp p ON r.btchtype = p.batchtype AND r.cntbtch = p.cntbtch AND r.cntentr = p.cntrmit
        INNER JOIN apibh H ON p.idvend = h.idvend AND p.idinvc = H.idinvc
        WHERE H.idinvc = apibh.idinvc
        GROUP BY H.idinvc) AS PayAmt,
       (Sum(apibh.amtgrosdst) - (SELECT DISTINCT ( Sum(p.amtpaym) )
        FROM aptcr r WITH(NOLOCK)
        INNER JOIN aptcp p ON r.btchtype = p.batchtype AND r.cntbtch = p.cntbtch AND r.cntentr = p.cntrmit
        INNER JOIN apibh H ON p.idvend = h.idvend AND p.idinvc = H.idinvc
        WHERE H.idinvc = apibh.idinvc
        GROUP BY H.idinvc) ) AS OpenAmountDomestic,
       (Sum(apibh.amtgrosdst) - ((SELECT DISTINCT ( Sum(p.amtpaym) )
        FROM aptcr r WITH(NOLOCK)
        INNER JOIN aptcp p ON r.btchtype = p.batchtype AND r.cntbtch = p.cntbtch AND r.cntentr = p.cntrmit
        INNER JOIN apibh H ON p.idvend = h.idvend AND p.idinvc = H.idinvc
        WHERE H.idinvc = apibh.idinvc
        GROUP BY H.idinvc) ) * ( apibh.exchratehc )) AS OpenAmountUSD,
       ( Select DATEDIFF(day, (SELECT DISTINCT ( Max(dbo.Udf_convert_int_date(r.datermit)) )
        FROM aptcr r WITH(NOLOCK)
        INNER JOIN aptcp p ON r.btchtype = p.batchtype AND r.cntbtch = p.cntbtch AND r.cntentr = p.cntrmit
        INNER JOIN apibh H ON p.idvend = h.idvend AND p.idinvc = H.idinvc
        WHERE H.idinvc = apibh.idinvc
        GROUP BY H.idinvc), GETDATE()) ) AS DueDays,
       '' As AgeDays,
       (SELECT DISTINCT ( Max(dbo.Udf_convert_int_date(r.datermit)) )
        FROM aptcr r WITH(NOLOCK)
        INNER JOIN aptcp p ON r.btchtype = p.batchtype AND r.cntbtch = p.cntbtch AND r.cntentr = p.cntrmit
        INNER JOIN apibh H ON p.idvend = h.idvend AND p.idinvc = H.idinvc
        WHERE H.idinvc = apibh.idinvc
        GROUP BY H.idinvc) AS PaidDate,
       dbo.Udf_convert_int_date(apibh.datebus) AS PostedDate,
       dbo.Udf_convert_int_date(apibh.datebus) AS AppliedDate,
       dbo.Udf_convert_int_date(apibh.dateinvc) AS AgingDate,
       dbo.Udf_convert_int_date(apibh.datedue) AS DueDate,
       dbo.Udf_convert_int_date(apibh.dateinvc) AS DocumentDate,
       NULL AS ReceivedDate,
       CASE WHEN ( apibh.datedisc ) = 0 THEN NULL
            ELSE dbo.Udf_convert_int_date(apibh.datedisc) END AS DiscountDate,
       apibh.codecurn AS CurrencyCode,
       apibh.exchratehc AS EffectiveExchangeRate
FROM apibh AS APIBH WITH(NOLOCK)
WHERE 1 = 1
--AND APIBH.IDINVC = '22036'
--AND APIBH.IDVEND = 'ABE129'
GROUP BY ( apibh.idvend ),
         ( apibh.idinvc ),
         ( apibh.amtgrosdst ),
         ( apibh.amtgrosdst - Isnull(apibh.amtgrosdst, 0) - apibh.amtdiscavl ),
         ( ( apibh.amtgrosdst * apibh.exchratehc ) - Isnull(apibh.amtgrosdst, 0) * apibh.exchratehc ) - ( apibh.amtdiscavl * apibh.exchratehc ),
         ( apibh.datebus ),
         apibh.datebus,
         apibh.dateinvc,
         apibh.datedue,
         CASE WHEN ( apibh.datedisc ) = 0 THEN NULL ELSE dbo.Udf_convert_int_date(apibh.datedisc) END,
         apibh.amtdiscavl,
         apibh.codecurn,
         apibh.exchratehc,
         apibh.idtrx,
         Isnull(apibh.amtgrosdst, 0),
         apibh.amttotdist[/code]
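The same correlated payment subquery is evaluated four times per row; hoisting it into one derived table (or an OUTER APPLY) is usually the biggest win here. A sketch of the shape, using the post's table names:

[code="sql"]SELECT h.idvend, h.idinvc,
       SUM(h.amtgrosdst) AS VoucherTotalDomestic,
       pay.PayAmt,
       SUM(h.amtgrosdst) - ISNULL(pay.PayAmt, 0) AS OpenAmountDomestic,
       pay.LastPaidDate
FROM apibh AS h
LEFT JOIN (
    SELECT p.idinvc,
           SUM(p.amtpaym) AS PayAmt,
           MAX(dbo.Udf_convert_int_date(r.datermit)) AS LastPaidDate
    FROM aptcp AS p
    INNER JOIN aptcr AS r
        ON r.btchtype = p.batchtype
       AND r.cntbtch = p.cntbtch
       AND r.cntentr = p.cntrmit
    GROUP BY p.idinvc
) AS pay ON pay.idinvc = h.idinvc
GROUP BY h.idvend, h.idinvc, pay.PayAmt, pay.LastPaidDate;[/code]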

Need guidance on how to Insert a new record between existing records

Posted: 08 Mar 2013 07:09 AM PST

Here is the table structure:

MemberNbr         Varchar(11)
MemberCardNumber  Varchar(10)
EffectiveDate     Int
TermDate          Int

Sample data is:

MemberNbr    MemberCardNumber  EffectiveDate  TermDate
12345678909  A020129091        20120101       20120430
12345678909  A020129091        20120501       20120630
12345678909  A020129091        20120701       20120831
12345678909  A020129091        20120901       0

I receive an incoming file which indicates that this member updated his CardNumber to B020129091 on 20120516, and I need to update the table to reflect this change in all his records going forward from 20120516, overwriting, if necessary, records which exist in the table and which have effective dates > 20120516, while also maintaining the TermDates.

Does anyone have some guidance on the SQL to do this?
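A sketch of one approach: close off the period the change lands in, insert a row for the remainder of that period, and re-card everything later (the table name is assumed; the int dates are handled as literals for clarity):

[code="sql"]-- 1. End the in-flight period the day before the change.
UPDATE dbo.MemberCards
SET TermDate = 20120515
WHERE MemberNbr = '12345678909'
  AND EffectiveDate < 20120516
  AND (TermDate >= 20120516 OR TermDate = 0);

-- 2. Insert the new-card row covering the rest of that period.
INSERT INTO dbo.MemberCards (MemberNbr, MemberCardNumber, EffectiveDate, TermDate)
VALUES ('12345678909', 'B020129091', 20120516, 20120630);

-- 3. Switch the card on every later period, keeping the dates.
UPDATE dbo.MemberCards
SET MemberCardNumber = 'B020129091'
WHERE MemberNbr = '12345678909'
  AND EffectiveDate > 20120516;[/code]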

Single value MIN and MAX dates from multiple rows

Posted: 08 Mar 2013 07:34 AM PST

This post has the code, and an Excel workbook with results and expected results (red text). My query works for the desired results, except for the last two columns (they were added at the 11th hour).

What I need is a query to capture a single MIN and MAX date based on the values found for each PATIENT_ID. If you look at the Excel sheet, you can see there are 7 records. I need the MIN date from DATE_WRITTEN and the MAX date from EXPIRES_ON. It's OK that they repeat, because the query is moving into Crystal Reports and Crystal is forgiving with duplicates. Here is the query:

[code="sql"]SELECT DISTINCT
    [Rxo].[SYS_ID],
    [Rxo].[PATIENT_ID],
    [Rxo].[DESCRIPTION],
    [Rxo].[RX_NUMBER],
    [JRxf].[MAX_FILL_NUM],
    [Rxo].[PT_CASE_PHYSICIAN_SYS_ID],
    CONVERT(VARCHAR(12), [Rxo].[DATE_WRITTEN], 110) AS DATE_WRITTEN,
    CONVERT(VARCHAR(12), [Rxo].[EXPIRES], 110) AS EXPIRES_ON,
    CONVERT(VARCHAR(12), [Rx3].[MIN_START], 110) AS MIN_START
FROM [dbo].[RX_ORDER] AS Rxo
LEFT JOIN (
    SELECT MAX([Rxf].[REFILL_NUMBER]) AS [MAX_FILL_NUM],
           [Rxf].[RX_ORDER_SYS_ID]
    FROM [dbo].[RX_FILL] AS Rxf
    GROUP BY [Rxf].[RX_ORDER_SYS_ID]
) JRxf ON [Rxo].[SYS_ID] = [JRxf].[RX_ORDER_SYS_ID]
LEFT JOIN [dbo].[PATIENT] AS Pat ON [Rxo].[PATIENT_ID] = [Pat].[PATIENT_ID]
LEFT JOIN [dbo].[PT_CASE] AS Ptc ON [Pat].[PATIENT_ID] = [Ptc].[PATIENT_ID]
/*Not working*/
LEFT JOIN (
    SELECT DISTINCT MIN([Rxo2].[DATE_WRITTEN]) AS MIN_START,
           [Rxo2].[PATIENT_ID],
           [Rxo2].[RX_NUMBER]
    FROM [dbo].[RX_ORDER] AS Rxo2
    GROUP BY [Rxo2].[PATIENT_ID], [Rxo2].[RX_NUMBER]
) Rx3 ON [Rxo].[PATIENT_ID] = [Rx3].[PATIENT_ID]
     AND [Rxo].[RX_NUMBER] = [Rx3].[RX_NUMBER]
WHERE [Ptc].[SITE_ID] = '0001'
  AND [Ptc].[CASE_STATUS_CODE] = 'A'
  AND [Rxo].[STATUS] = 'A'
  AND [Ptc].[PATIENT_ID] = 2000000000
ORDER BY [Rxo].[PATIENT_ID], [Rxo].[RX_NUMBER][/code]
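Window aggregates can return the per-patient MIN and MAX on every row without another join, which is the shape the last two columns need (a sketch with the post's names):

[code="sql"]SELECT Rxo.PATIENT_ID,
       Rxo.RX_NUMBER,
       MIN(Rxo.DATE_WRITTEN) OVER (PARTITION BY Rxo.PATIENT_ID) AS MinDateWritten,
       MAX(Rxo.EXPIRES) OVER (PARTITION BY Rxo.PATIENT_ID) AS MaxExpiresOn
FROM dbo.RX_ORDER AS Rxo
WHERE Rxo.PATIENT_ID = 2000000000;[/code]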

Need to find out the latest STATUS of each Consumers on a given date from their Activities

Posted: 08 Mar 2013 04:22 AM PST

Hi there,

I've a ConsumerActivity table that records all the activities. Now, I need to find out the latest status for each Consumer on a given day. For example, I need to write a query that gives me ConsumerID and latest Status on '2012-04-01 00:00:00'. In this table, there is no activity on this date and as such I won't get any result. However, I want to get the result as:

ConsumerID  Status
101         ACTIVE
102         PREMIUM

Below is my simple scenario and query:

USE [Sample]
GO
-- Create Table
CREATE TABLE [dbo].[ConsumerActivity](
    [ConsumerID] [varchar](10) NOT NULL,
    [ActivityDate] [datetime] NULL,
    [Status] [varchar](10) NULL
) ON [PRIMARY]
GO
-- Insert some data into this table
INSERT INTO dbo.ConsumerActivity (ConsumerID, ActivityDate, Status)
SELECT 101, '2012-01-10 00:00:00', 'INACTIVE'
UNION ALL
SELECT 101, '2012-05-20 00:00:00', 'ACTIVE'
UNION ALL
SELECT 102, '2012-02-10 00:00:00', 'BASE'
UNION ALL
SELECT 102, '2012-03-15 00:00:00', 'PREMIUM'
GO

Thanks.
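The usual pattern is "latest row at or before the date" per consumer; a sketch against the table above:

[code="sql"]DECLARE @AsOf datetime = '2012-04-01';

SELECT ca.ConsumerID, ca.Status
FROM dbo.ConsumerActivity AS ca
JOIN (
    SELECT ConsumerID, MAX(ActivityDate) AS LastDate
    FROM dbo.ConsumerActivity
    WHERE ActivityDate <= @AsOf
    GROUP BY ConsumerID
) AS latest
  ON latest.ConsumerID = ca.ConsumerID
 AND latest.LastDate = ca.ActivityDate;[/code]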

Strategy for n-level approvals

Posted: 08 Mar 2013 02:50 AM PST

I have a TimesheetMaster table which requires 1 or more "approvals". I've created an eventlog table to keep track of approval events. Now, if any approver rejects a timesheet, the approval process must begin again, but I don't want to lose track of all the events. If I didn't care about keeping track of all the events, I could just purge the approval events whenever there was a rejection event. What would your strategy be? One idea I had was to code every approval as "current", or "expired", so a rejection event would, instead of deleting all related events, would just mark them as "expired". My program would seek out only "current" approvals to see if the timesheet is finally approved. Other cool ideas? You guys always have the neatest tricks....

Query help.....

Posted: 08 Mar 2013 03:42 AM PST

Hi All,

Can anyone help on the below query?

declare @tbl table (TBLID int identity(1,1), ID int, patientName varchar(10), age int, city varchar(100), Mobile int)

insert @tbl (ID, patientName, age, city, Mobile)
select 1, 'Ramesh', 20, 'HYD', 12345678 union all
select 1, 'Ramesh new', 20, 'HYDERABAD ', 12345678 union all
select 1, 'Ramesh new', 20, 'HYDERABAD ', 87654321

select * from @tbl

TBLID  ID  patientName  age  city       Mobile
1      1   Ramesh       20   HYD        12345678
2      1   Ramesh       24   HYD        12345678
3      1   Ramesh new   20   HYDERABAD  87654321

I want output in the below format, showing which columns' data got changed:

Columns      OLDDATA   NEWDATA
patientName  Ramesh    Ramesh new
City         HYD       HYDERABAD
Mobile       12345678  87654321

Can anyone help on this?
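A sketch that compares the first and last versions of the row and unpivots only the changed pairs (continuing from the @tbl declared above):

[code="sql"]WITH o AS (SELECT * FROM @tbl WHERE TBLID = (SELECT MIN(TBLID) FROM @tbl)),
     n AS (SELECT * FROM @tbl WHERE TBLID = (SELECT MAX(TBLID) FROM @tbl))
SELECT v.Columns, v.OLDDATA, v.NEWDATA
FROM o
CROSS JOIN n
CROSS APPLY (VALUES
    ('patientName', o.patientName, n.patientName),
    ('age', CAST(o.age AS varchar(10)), CAST(n.age AS varchar(10))),
    ('City', o.city, n.city),
    ('Mobile', CAST(o.Mobile AS varchar(20)), CAST(n.Mobile AS varchar(20)))
) AS v (Columns, OLDDATA, NEWDATA)
WHERE v.OLDDATA <> v.NEWDATA;[/code]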

Update master from detail

Posted: 08 Mar 2013 03:13 AM PST

I have a timesheet master table and a related timesheet detail table. Each detail row could trigger the need for an override approval of the whole timesheet. Currently, I have an ASP.Net application that checks for 1 or more detail rows needing override approval and updates the master column "RequiresOV" (int).

I'm thinking of setting a trigger on the detail table to do the update on the master automatically any time a row requiring override permission is added, updated, or deleted. It would simply add up all the detail rows requiring override and update the master appropriately. Alternatively, I could make the column in the master a computed column that does roughly the same thing.

Override rows are uncommon; say 5% of timesheets contain an override row. The average timesheet has maybe 300 rows, and there are maybe a dozen timesheets submitted per week. Some are submitted in bulk (uploaded from a workbook...) and some are hand-entered in an ASP.Net app, one row at a time.

What do you guys think?
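If the trigger route wins, one shape that handles inserts, updates, and deletes in a single statement (a sketch; table and column names are assumed):

[code="sql"]CREATE TRIGGER trg_TimesheetDetail_RequiresOV
ON dbo.TimesheetDetail
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE m
    SET RequiresOV = (SELECT COUNT(*)
                      FROM dbo.TimesheetDetail AS d
                      WHERE d.TimesheetID = m.TimesheetID
                        AND d.NeedsOverride = 1)
    FROM dbo.TimesheetMaster AS m
    WHERE m.TimesheetID IN (SELECT TimesheetID FROM inserted
                            UNION
                            SELECT TimesheetID FROM deleted);
END;[/code]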

[SQL Server 2008 issues] How to get Below T-SQL query Output..?

How to get Below T-SQL query Output..?

Posted: 08 Mar 2013 05:23 PM PST

Hi All,

This is the query I have written:

DECLARE @FromDate DATETIME
DECLARE @EndDate DATETIME
SET @FromDate = '2013-01-01 00:00:00.000'
SET @EndDate = '2013-02-13 00:00:00.000'

SELECT year(sd.FKDAT) As YEARWISE_DATA,
       sg.KUNNR As PARTY,
       sg.NAME1 As NAME,
       SUM(sd.FKIMG) As QUANTITY,
       SUM(sd.NETWR) As VALUE_IN_FC,
       SUM(sd.NTGEW) As WEIGHT
FROM Sales_group sg WITH(NOLOCK)
INNER JOIN SALES_DATA sd WITH(NOLOCK) ON sg.KUNNR = sd.KUNAG
WHERE sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate
GROUP By sd.FKDAT, sg.KUNNR, sg.NAME1
ORDER By 1, sg.KUNNR ASC

[b]Below is the output I am getting:[/b]

2013 HA010 ADK  360.000  36988.20  9206.434
2013 HA010 ADK  205.000  31363.80  9299.848
2013 HA018 AGRI 295.000  42646.25  12578.149
2013 HA018 AGRI 119.000  29587.75  8816.112
2013 HA018 AGRI  21.000  10289.65  2882.488
2013 HA018 AGRI 249.000  57764.20  17605.415

[b]Required output I want:[/b]

2013 HA010 ADK  565.000  68352.00  18506.31
2013 HA018 AGRI 684.000  140287.85 41882.164

Thanks & Regards,
Bhushan
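The extra rows come from GROUP BY sd.FKDAT keeping each date separate; grouping by the same YEAR() expression used in the SELECT collapses them (a sketch of the corrected query):

[code="sql"]SELECT year(sd.FKDAT) As YEARWISE_DATA,
       sg.KUNNR As PARTY,
       sg.NAME1 As NAME,
       SUM(sd.FKIMG) As QUANTITY,
       SUM(sd.NETWR) As VALUE_IN_FC,
       SUM(sd.NTGEW) As WEIGHT
FROM Sales_group sg
INNER JOIN SALES_DATA sd ON sg.KUNNR = sd.KUNAG
WHERE sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate
GROUP By year(sd.FKDAT), sg.KUNNR, sg.NAME1
ORDER By 1, sg.KUNNR ASC[/code]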

query help

Posted: 08 Mar 2013 03:11 AM PST

Hi All,

Can anyone help on the below query?

declare @tbl table (TBLID int identity(1,1), ID int, patientName varchar(10), age int, city varchar(100), Mobile int)

insert @tbl (ID, patientName, age, city, Mobile)
select 1, 'Ramesh', 20, 'HYD', 12345678 union all
select 1, 'Ramesh new', 20, 'HYDERABAD ', 12345678 union all
select 1, 'Ramesh new', 20, 'HYDERABAD ', 87654321

select * from @tbl

TBLID  ID  patientName  age  city       Mobile
1      1   Ramesh       20   HYD        12345678
2      1   Ramesh       24   HYD        12345678
3      1   Ramesh new   20   HYDERABAD  87654321

I want output in the below format, showing which columns' data got changed:

Columns      OLDDATA   NEWDATA
patientName  Ramesh    Ramesh new
City         HYD       HYDERABAD
Mobile       12345678  87654321

Please prepare any dynamic query.

Ensuring SARGable Stored Proc Search Parameter

Posted: 08 Mar 2013 05:08 AM PST

I am creating a query within a stored procedure, and I am trying to create a parameter where I can pass a value to the query, but I want the resulting predicate to be SARGable.

The field is TIMES_TESTED SMALLINT and is either 1 (first-time test taker) or NULL (retest). I don't have any control over the data in the table, but I have complete control over how the stored procedure works.

So the lazy way to do this would be to accept a parameter for the stored procedure:

[code="sql"] ... , @firstTime smallint = NULL -- NULL any record, 0 retests, 1 first-time[/code]

And in my query's WHERE clause, use this abomination:

[code="sql"] AND (@firstTime IS NULL OR ISNULL(TIMES_TESTED, 0) = @firstTime)[/code]

So how could I improve this?
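One alternative that keeps each branch SARGable is to split the three cases instead of wrapping the column in ISNULL (a sketch; the table and select list are assumed). OPTION (RECOMPILE) on the single-statement form is another route:

[code="sql"]IF @firstTime IS NULL
    SELECT * FROM dbo.TestResults;
ELSE IF @firstTime = 1
    SELECT * FROM dbo.TestResults WHERE TIMES_TESTED = 1;
ELSE
    SELECT * FROM dbo.TestResults WHERE TIMES_TESTED IS NULL;[/code]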

SSRS 2008R2 showing counties (not just states)

Posted: 08 Mar 2013 01:53 PM PST

I have a hopefully simple question. SSRS 2008R2 can do maps by state just fine... is there an easy way to get City>County>State rollup data somewhere or get shapefiles for Counties in the US? I'm working on a database for someone that would be infinitely more useful if I could show like a heat map for sales by county across the country... there's a really cool map [url=http://www.mssqltips.com/sqlservertip/2552/creating-an-ssrs-map-report-with-data-pinpoints/]here[/url] that shows the county lines in it... and that's the part I want - the Counties. The granularity of the data I have is not too good, so county-level or so is about right.Is there an easy way to create a map like that? (like a color-coded map from election night, but a county-by-county instead of state-by-state?) If so, how? And where would I get the shapefiles for the counties?Thanks!Pieter

Monthly balance from two tables

Posted: 25 Dec 2012 04:58 PM PST

I have the following information with two (plus one header) tables:

[code="sql"]CREATE TABLE [dbo].[voucherCr](
    [srno] [int] IDENTITY(1,1) NOT NULL,
    [vouchertype] [nvarchar](50) NULL,
    [voucherprefix] [nvarchar](50) NULL,
    [voucherno] [nvarchar](50) NULL,
    [crparty] [int] NULL,
    [cramount] [float] NULL
)[/code]

[code="sql"]CREATE TABLE [dbo].[voucherDr](
    [srno] [int] IDENTITY(1,1) NOT NULL,
    [vouchertype] [nvarchar](50) NULL,
    [voucherprefix] [nvarchar](50) NULL,
    [voucherno] [nvarchar](50) NULL,
    [drparty] [int] NULL,
    [dramount] [float] NULL
)[/code]

[code="sql"]CREATE TABLE [dbo].[voucher](
    [srno] [int] IDENTITY(1,1) NOT NULL,
    [vouchertype] [nvarchar](50) NULL,
    [voucherprefix] [nvarchar](50) NULL,
    [voucherno] [nvarchar](50) NULL,
    [dt] [date] NULL,
    [details] [nvarchar](255) NULL,
    [invoicetype] [nvarchar](50) NULL
)[/code]

Between all three tables, the vouchertype, voucherprefix, and voucherno columns are inter-related. Now I have the following data for the three tables:

Voucher
[code="sql"]srno vouchertype voucherprefix voucherno dt         details invoicetype
1    PURCHASE    P             1         2012-12-25         RETAIL INVOICE
2    PAYMENT     R             1         2012-12-25
3    PURCHASE    P             2         2012-12-25         RETAIL INVOICE
4    PURCHASE    P             3         2012-12-25         RETAIL INVOICE[/code]

VoucherCr
[code="sql"]srno vouchertype voucherprefix voucherno crparty cramount
1    PURCHASE    P             1         2       55000
2    PAYMENT     R             1         1       55000
3    PURCHASE    P             2         2       28000
4    PURCHASE    P             3         2       30550[/code]

VoucherDr
[code="sql"]srno vouchertype voucherprefix voucherno drparty dramount
1    PURCHASE    P             1         4       55000
2    PAYMENT     R             1         2       55000
3    PURCHASE    P             2         4       28000
4    PURCHASE    P             3         4       29000
5    PURCHASE    P             3         97      1160
6    PURCHASE    P             3         98      290
7    PURCHASE    P             3         46      50
8    PURCHASE    P             3         66      50[/code]

Now I want results as follows:

Month     Credit  Debit
December  168550  168550
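A sketch for that result: stack the credit and debit rows, join back to voucher for the date, and aggregate by month:

[code="sql"]SELECT DATENAME(MONTH, v.dt) AS [Month],
       SUM(CASE WHEN x.side = 'C' THEN x.amount ELSE 0 END) AS Credit,
       SUM(CASE WHEN x.side = 'D' THEN x.amount ELSE 0 END) AS Debit
FROM dbo.voucher AS v
JOIN (
    SELECT vouchertype, voucherprefix, voucherno, 'C' AS side, cramount AS amount
    FROM dbo.voucherCr
    UNION ALL
    SELECT vouchertype, voucherprefix, voucherno, 'D', dramount
    FROM dbo.voucherDr
) AS x
  ON x.vouchertype = v.vouchertype
 AND x.voucherprefix = v.voucherprefix
 AND x.voucherno = v.voucherno
GROUP BY DATENAME(MONTH, v.dt);[/code]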

Join two tables

Posted: 07 Mar 2013 11:22 PM PST

I have two tables with the following data:

Table1
[code="sql"]Month   dramount
------  ---------
April   1709.75
March   5000[/code]

Table2
[code="sql"]Month   cramount
------  ---------
March   6295[/code]

I want to join the two tables and get results like the following:

[code="sql"]Month   dramount  cramount
------  --------- ----------
April   1709.75   NULL
March   5000      6295[/code]

How can I achieve this?
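A FULL OUTER JOIN on the month covers rows present in either table (a sketch):

[code="sql"]SELECT COALESCE(t1.[Month], t2.[Month]) AS [Month],
       t1.dramount,
       t2.cramount
FROM Table1 AS t1
FULL OUTER JOIN Table2 AS t2 ON t1.[Month] = t2.[Month];[/code]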

How to update same record that fires trigger?

Posted: 08 Mar 2013 07:25 AM PST

There is a flag column that basically indicates that the record changed since the last "housecleaning".

Can I create a trigger on the table that would set that flag to "True" just for the particular record that changed?

Thank you
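Yes: inserted/deleted identify exactly the rows the statement touched. A sketch (table, key, and flag names assumed; the UPDATE() guard stops the trigger from re-firing on its own flag write):

[code="sql"]CREATE TRIGGER trg_MyTable_MarkChanged
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(IsChanged) RETURN;   -- ignore our own flag maintenance
    UPDATE t
    SET IsChanged = 1
    FROM dbo.MyTable AS t
    JOIN inserted AS i ON i.ID = t.ID;
END;[/code]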

Transaction log in Subscriber database for Transactional Replication

Posted: 04 Mar 2013 08:31 AM PST

I have created replication between two SQL Server 2008 R2 databases, and it has been functioning well since the initial setup 3 months ago. The Publisher database is the back-end for an OLTP application with 50 users who are working constantly, making changes and adding new information. The Subscriber database is used for reporting.

Today I noticed that the Subscriber database was set to Full recovery mode with no transaction log backups. I panicked for a minute and took a quick look at the transaction log drive on the Subscriber server - it has 50% free space, 24 GB. I was amazed that I had been avoiding what would normally cause a drive-full issue.

My question is this. Since replication is constantly making changes to the Subscriber database (updating various tables as they change in the Publisher database), why is the Subscriber log file not filling up and giving me grief, since I don't run backups on the Subscriber database - and haven't done for the entire three months this has been running? Gail Shaw mentioned in another forum question, while I was researching this, that 'Replication does not in any way use the Subscriber's log file'. I'm not sure that I understand the seeming contradiction between the updating going on in the Subscriber database, Full recovery mode, no transaction log backups taking place, and the transaction log not filling up.

Can anyone enlighten me and/or point me to articles that might help me understand this more thoroughly?

Thank you. Willem

Update field in dimDate table

Posted: 08 Mar 2013 05:47 AM PST

Hi

I have a dimDate table with a DateFull field holding values like '1900-01-01 00:00:00.000', '1900-01-02 00:00:00.000', etc. I need to change each value to the last second of its day, e.g. '1900-01-02 23:59:59.000'.

Thanks
Joe
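A sketch of pushing every DateFull to the last second of its own day:

[code="sql"]UPDATE dbo.dimDate
SET DateFull = DATEADD(SECOND, 86399, CAST(CAST(DateFull AS date) AS datetime));[/code]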

Performance difference between LIKE and CHARINDEX?

Posted: 08 Mar 2013 06:02 AM PST

I have a T-SQL script that looks through the firstname column and modifies any firstname with an '&' anywhere in the value. It then blanks out the part of the name from the '&' onward.

I know that LIKE '%&%' results in a table scan. I also know that I can check for a value of > 0 from CHARINDEX, but I suspect it will also perform a table scan.

My question is whether there is some outside chance that one of the methods would be more efficient than the other.

Thanks!
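Measuring both against the actual table settles it; a sketch of the harness (table and column names assumed):

[code="sql"]SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.People WHERE firstname LIKE '%&%';
SELECT COUNT(*) FROM dbo.People WHERE CHARINDEX('&', firstname) > 0;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;[/code]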

expression syneax for multiple where clause

Posted: 08 Mar 2013 02:18 AM PST

Hi all,

I am trying to create the expression equivalent of the following query:

SELECT count(TasksName)
FROM [db].[dbo].[testdetails]
WHERE TopLevelProjectName = '40 Project Folder' and TasksName = 'Project Info'

I basically need a working version of this incorrect syntax:

=CountDistinct(IIF(Fields!TopLevelProjectName.Value = "40 Project Folder" AND Fields!TasksName.Value = "Project Info"))

Thanks in advance,
Sharon
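For what it's worth, IIF needs all three arguments; the usual shape returns the field for matching rows and Nothing otherwise, so CountDistinct skips the rest (a sketch in SSRS expression syntax):

=CountDistinct(IIF(Fields!TopLevelProjectName.Value = "40 Project Folder"
                   AND Fields!TasksName.Value = "Project Info",
                   Fields!TasksName.Value, Nothing))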

Validating trace file events

Posted: 08 Mar 2013 05:09 AM PST

Hello,

If I have a trace running and populating some *.trc files, my question is: how can I validate that I am indeed tracing with the sp_trace_setevent filters I have set? Are there dynamic tables where I can fetch the information on what is being traced?

Thank you!
JR
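Yes: the built-in trace functions report exactly what each trace captures (a sketch; trace id 1 is the default trace, substitute your own):

[code="sql"]-- Events and columns captured by every running trace:
SELECT t.id AS trace_id, ei.eventid, ei.columnid
FROM sys.traces AS t
CROSS APPLY sys.fn_trace_geteventinfo(t.id) AS ei;

-- Filters set on a given trace:
SELECT * FROM sys.fn_trace_getfilterinfo(1);[/code]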

View Must Return Nulls

Posted: 08 Mar 2013 12:13 AM PST

Hi,

I have a view called vw_TrusteeProductFilter that has a CTE that returns a certain number of records. This view is being used in 55 stored procedures. But now we do not want this view to retrieve any records, so instead of unhooking this view in all those stored procedures, I just want the view to return nothing. Below is the view:

ALTER VIEW [Capacity].[vw_TrusteeProductFilter]
AS
WITH cteTrusteeCapReservations (TrusteeID, ProductTypeLookup, ProductClassLookup, FilterClassLookup, Count) AS
(
    SELECT ISNULL(BranchTrusteeID, LenderTrusteeID) AS TrusteeID,
           ProductTypeLookup, ProductClassLookup, FilterClassLookup,
           COUNT(*) AS [Count]
    FROM Capacity.tCapacityReservation
    WHERE CreationDateTime > CONVERT(DATE, GETDATE())
    GROUP BY ISNULL(BranchTrusteeID, LenderTrusteeID), ProductTypeLookup, ProductClassLookup, FilterClassLookup
    UNION
    SELECT LenderTrusteeID, ProductTypeLookup, ProductClassLookup, FilterClassLookup,
           COUNT(*) AS [Count]
    FROM Capacity.tCapacityReservation
    WHERE CreationDateTime > CONVERT(DATE, GETDATE())
    GROUP BY LenderTrusteeID, ProductTypeLookup, ProductClassLookup, FilterClassLookup
)
SELECT trc.TrusteeID,
       trc.CapacityTypeLookup,
       3 AS CapacityLevelLookup,
       CASE WHEN trc.ProductTypeLookup = 1 AND trc.ProductClassLookup = 1 THEN 1
            WHEN trc.ProductTypeLookup = 1 AND trc.ProductClassLookup = 2 THEN 2
            WHEN trc.ProductTypeLookup = 2 THEN 3
       END AS HomeLoanProductTypeLookup,
       trc.FilterClassLookup,
       trc.ReferralCount + ISNULL(cte.Count, 0) AS ReferralCount,
       trc.ReferralCapacity,
       CASE WHEN trc.ReferralCount + ISNULL(cte.Count, 0) >= trc.ReferralCapacity THEN 1 ELSE 0 END AS IsCappedOut
FROM dbo.tTrusteeReferralCapacity trc
JOIN (
    SELECT 6 AS CapTypeLookup
    FROM Filter.tCapacityHoliday a
    WHERE CapacityHolidayDate = CONVERT(DATE, GETDATE())
    UNION
    SELECT 5
    WHERE DATEPART(dw, GETDATE()) IN (1, 7)
      AND NOT EXISTS (SELECT 6 FROM Filter.tCapacityHoliday a WHERE CapacityHolidayDate = CONVERT(DATE, GETDATE()))
    UNION
    SELECT 4
    WHERE DATEPART(dw, GETDATE()) IN (2, 3, 4, 5, 6)
      AND NOT EXISTS (SELECT 6 FROM Filter.tCapacityHoliday a WHERE CapacityHolidayDate = CONVERT(DATE, GETDATE()))
) ctl ON trc.CapacityTypeLookup = ctl.CapTypeLookup
LEFT JOIN cteTrusteeCapReservations cte
       ON trc.TrusteeID = cte.TrusteeID
      AND trc.ProductTypeLookup = cte.ProductTypeLookup
      AND trc.ProductClassLookup = cte.ProductClassLookup
      AND trc.FilterClassLookup = cte.FilterClassLookup
WHERE ISNULL(trc.ProductTypeLookup, -1) IN (1, 2)
  AND ISNULL(trc.ProductClassLookup, -1) IN (-1, 1, 2)
  AND ISNULL(trc.FilterClassLookup, -1) > 0
  AND ISNULL(trc.[DayOfWeek], -1) = -1
  AND ISNULL(trc.IsFixedFilter, -1) = -1
  AND trc.PropertyState IS NULL
  AND ISNULL(trc.QFormSubTypeLookup, -1) = -1
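If an empty result set is acceptable to the 55 callers, the lightest edit is to AND a contradiction into the view's WHERE clause so the column list stays intact. A toy illustration of the idea (demo names):

[code="sql"]CREATE TABLE dbo.SomeTable (Col1 int, Col2 int);
GO
CREATE VIEW dbo.vw_Demo
AS
SELECT t.Col1, t.Col2
FROM dbo.SomeTable AS t
WHERE 1 = 0;   -- same shape, never any rows
GO
SELECT * FROM dbo.vw_Demo;  -- empty result set, same columns[/code]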

partition_id VS. hobt_id

Posted: 02 Mar 2013 09:44 PM PST

Hello!

There are 2 columns in the [url=http://msdn.microsoft.com/en-us/library/ms175012.aspx]sys.partitions[/url] view - [b]partition_id[/b] and [b]hobt_id[/b]. From my point of view/experience, for any given row from this view [b]partition_id[/b] always equals [b]hobt_id[/b], and I have never seen two different values. It seems reasonable, because (simplifying a complex thing) we can say: "a partition is a hobt, a hobt is a partition". But at the same time, the article about another view - [url=http://msdn.microsoft.com/en-us/library/ms189792(v=sql.105).aspx]sys.allocation_units[/url] - tells us:

[quote]If type = 1 or 3, container_id = [b]sys.partitions.hobt_id[/b].
If type is 2, then container_id = [b]sys.partitions.partition_id[/b].[/quote]

So - can these 2 values ever be NOT equal? As I said, I have never seen it in practice. Can you please give an example of a table (and DB, if needed) for which [b]partition_id[/b] and [b]hobt_id[/b] will NOT be equal?
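A quick way to hunt for a counterexample on your own databases (plain catalog query):

[code="sql"]SELECT OBJECT_NAME(p.object_id) AS table_name,
       p.index_id, p.partition_number,
       p.partition_id, p.hobt_id,
       CASE WHEN p.partition_id = p.hobt_id THEN 'equal' ELSE 'DIFFERENT' END AS comparison
FROM sys.partitions AS p
ORDER BY table_name, p.index_id, p.partition_number;[/code]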

How to create database from .mdf file only

Posted: 10 Jan 2012 04:55 AM PST

Hi,

I have installed SQL Server 2012 and downloaded a sample database from the link below, which has AdventureWorks2008R2_Data.mdf for download.

http://msftdbprodsamples.codeplex.com/releases/view/55330

Now, how do I restore the AdventureWorks database from the AdventureWorks2008R2_Data.mdf file alone?

Thanks
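With only the .mdf, attaching and letting SQL Server rebuild the log is the usual route (a sketch; adjust the path):

[code="sql"]CREATE DATABASE AdventureWorks2008R2
ON (FILENAME = N'C:\Data\AdventureWorks2008R2_Data.mdf')
FOR ATTACH_REBUILD_LOG;[/code]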

Performance Tuning Question

Posted: 07 Mar 2013 07:48 PM PST

In trying to get a better grip on performance tuning, I have the following example, which I was wondering if people would comment on. The code is as follows:

[code="sql"]Select 'All' as OpID, 'All Ops' as 'Op Name', 1
union all
Select Distinct LEFT(Oprid,4), Name + ' (' + LEFT(Oprid,4) + ')', 2
from Dynamicsv5Realtime.dbo.RouteOprTable
where dataareaid = 'AJB'
and Oprid between '3000' and '4000'
and LEFT(NAME,3) <> 'DNU'
Order by 3,2[/code]

I have also attached the plan, and the index is below.

[code="sql"]USE [DynamicsV5Realtime]
GO
/****** Object: Index [I_354OPRIDX] Script Date: 03/08/2013 10:41:13 ******/
ALTER TABLE [dbo].[ROUTEOPRTABLE] ADD CONSTRAINT [I_354OPRIDX] PRIMARY KEY CLUSTERED
(
    [DATAAREAID] ASC,
    [OPRID] ASC
) WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY][/code]

Is there anything that can be done to improve the query? What about the sorts and merge joins - can they be improved upon? How do the DISTINCT and LEFT functions impact things, and should the index have an INCLUDE for NAME?

Thanks for any tips
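On the INCLUDE question, a covering nonclustered index would let the DISTINCT scan read narrower rows (a sketch; whether it pays off depends on the table's width and write load):

[code="sql"]CREATE NONCLUSTERED INDEX IX_RouteOprTable_DataArea_Oprid
ON dbo.ROUTEOPRTABLE (DATAAREAID, OPRID)
INCLUDE (NAME);[/code]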

Not consuming memory

Posted: 08 Mar 2013 12:37 AM PST

My company is rolling out a new product that was supposed to go to beta on March 1. That has been deferred, so the pressure is to get everything done yesterday. It is a complex multi-tiered application running web services, Citrix published apps, and multiple databases and instances on a virtualized cluster - sort of a kitchen sink of input sources. I had ZERO input on the database design and system architecture, so I'm having to learn the system in the middle of the problem. Which probably sounds familiar to most people here, no?

The load test was focused on the web services, so I was not allowed to capture any SQL statistics. I was only able to watch the defaults available through the Activity Monitor. The strangest thing during the test, from the database end, is that memory utilization peaked at 1.5 GB on an instance that had 28 GB assigned to it. Today we tested the instance with a few memory-hogging scripts just to show that it was configured properly and, as expected, the memory was easily consumed.

The load test had some interesting things happen. As the web requests loaded up the front end, the CPU climbed linearly - a nice direct correlation to the number of requests hitting the web servers. But as soon as the CPU hit 25%, it leveled off, even as we doubled, tripled, and quadrupled the number of web hits. More interesting is that there were two SQL instances in the test, and when the CPU leveled off, the waits displayed in the Activity Monitor started climbing up into the thousands. Even more curious is that the waits on each instance were inversely correlated: when one would peak, the other would be at a minimum, in a very regular sawtooth pattern.

So I have to recommend a "solution" without having any data, since I wasn't allowed to pull any during the test. FWIW, disk I/O was fine. Today's memory test showed that the memory allocation of the instance is fine.

My first recommendation is going to be to put all the databases on the same instance (there were two databases, one on each instance, that talked to each other a great deal) just to see how it affects the waits and the crosstalk between those two databases. Then look at tempdb issues, and insist that I be allowed to pull some performance counters DURING the test.

I found the oscillation of peak waits very interesting. Has anyone ever seen this type of behavior? I'm not expecting any magic answers here - more just some possibilities so I can drill down into the lower levels.
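When you do get access, the cumulative wait stats are cheap to snapshot before and after a run (standard DMV; the exclusion list here is abbreviated):

[code="sql"]SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;[/code]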

How to update multiple tables in sql server 2008 ?

Posted: 07 Mar 2013 10:58 PM PST

Hi All,

I want to update multiple columns from multiple tables in a single UPDATE query, just like the query below:

UPDATE Table1, Table2
SET Table1.Column1 = 'one', Table2.Column2 = 'two'
FROM Table1 T1, Table2 T2
WHERE T1.id = T2.id
AND T1.id = 'id1'

Does SQL Server 2008 provide any mechanism to do so? If SQL Server 2008 provides such a nice approach, please share some links with me!

Thanks!
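A T-SQL UPDATE can only target one table, so the standard pattern is two UPDATEs in one transaction (a sketch with the post's names):

[code="sql"]BEGIN TRANSACTION;

UPDATE T1
SET T1.Column1 = 'one'
FROM Table1 AS T1
JOIN Table2 AS T2 ON T1.id = T2.id
WHERE T1.id = 'id1';

UPDATE T2
SET T2.Column2 = 'two'
FROM Table2 AS T2
JOIN Table1 AS T1 ON T1.id = T2.id
WHERE T1.id = 'id1';

COMMIT;[/code]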

Database Mail

Posted: 07 Mar 2013 10:23 PM PST

Hi, how do I delete previously created Database Mail profiles through SSMS?
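In SSMS it's under Management > Database Mail > Configure Database Mail > "Manage profiles and accounts..."; the T-SQL equivalent is the msdb procedure (profile name assumed):

[code="sql"]EXEC msdb.dbo.sysmail_delete_profile_sp @profile_name = N'OldProfile';[/code]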

REG: disk Space

Posted: 07 Mar 2013 06:51 PM PST

Hi, on my C: drive, where my SQL Server runs, I have only 2 GB left. I checked log space for all databases and found that one of my production DBs is 123 GB in size, with a log file of 19 GB. Can I shrink that log file?
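Yes, but back the log up first if the database is in full recovery, and schedule regular log backups afterwards or it will just grow back (a sketch; the database and logical file names are assumed):

[code="sql"]BACKUP LOG MyProdDb TO DISK = N'D:\Backups\MyProdDb_log.trn';

USE MyProdDb;
DBCC SHRINKFILE (MyProdDb_log, 1024);  -- logical log file name, target size in MB[/code]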

Executionlog2 table

Posted: 07 Mar 2013 08:13 PM PST

For how long do the ExecutionLog2 view and the catalog tables in the ReportServer database hold their data? Do they store all historical data permanently, or does SQL Server remove the data from both after some interval?
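Report Server trims the execution log itself on a schedule; the retention period is a server setting kept in the catalog (commonly 60 days by default). A sketch for checking it:

[code="sql"]SELECT Name, Value
FROM ReportServer.dbo.ConfigurationInfo
WHERE Name = 'ExecutionLogDaysKept';[/code]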

Versioning SQL Objects

Posted: 07 Mar 2013 07:12 PM PST

Good morning, set-based thinkers.

I wonder if you can help me. In my previous contract, as part of a DBA team, I helped maintain about 1800 databases over 97 servers, with thousands of stored procs. We had a slick release management process based on Visual SourceSafe, and we used the VSS tokens within the sprocs to embed the version info etc. into the comment block upon each check-in.

My next contract finds me somewhere that has zero release management. As part of my work I have proposed a system similar to the one I have experience with, as I know that it works. However, it has hit a stumbling block with my customer, because he has done thirty seconds of Googling and discovered that VSS is only on extended support until 2017. He's concerned that I'm proposing something with built-in obsolescence. I think that he has a point. Kind of. Four years is an eternity for the IT here. It would be ample time to establish a viable source code control system - and then start looking for an alternative that we can migrate to.

But - my question is - am I missing a trick? I only know VSS. I know that TFS can't embed version info as it doesn't have keyword expansion. Or am I wrong? Can Git or Subversion do this? Will Red Gate source code control do this? What's [i]your[/i] solution?

Mr. C
