Saturday, July 20, 2013

[how to] I've backlogged queries on my SQL Server - what now?



I've backlogged queries on my SQL Server - what now?

Posted: 20 Jul 2013 05:07 PM PDT

So, I'm a bit of a plonker right...

By accident, I issued in the region of 500,000 individual queries against my DB. It's been running for 6 hours now. I restarted the server, but they continue to insert. I need them to stop - how can I do this?
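One common approach, sketched rather than a verified fix for this exact setup: find the offending sessions in the DMVs and generate KILL statements for them. The login filter below is an assumption - substitute whatever identifies the runaway connections. Note that if the inserts survive a restart, something outside SQL Server (an application retry loop, a job, a queue) is resubmitting them, and that source must be stopped too or the sessions will simply reconnect.

-- generate KILL commands for sessions from a given login (placeholder name)
SELECT 'KILL ' + CAST(session_id AS varchar(10)) + ';'
FROM sys.dm_exec_sessions
WHERE login_name = 'offending_login'   -- assumption: adjust to your app's login
  AND session_id <> @@SPID;            -- don't kill your own session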

How to use group by to select only group row in these case scenarios

Posted: 20 Jul 2013 08:22 PM PDT

I'm learning SQL for the first time and I was given these queries to do.

I wrote those statements to solve the problem and they work, but they don't use group by.

Can somebody please suggest how I change this? They should only use group by and maybe where, I don't think this assignment is supposed to use anything more advanced.

What is the latest birth date of the presidents? (Youngest President)

SELECT birth, first_name, last_name
FROM sampdb.president
WHERE birth = (SELECT MAX(birth) FROM sampdb.president);

+------------+------------+-----------+
| birth      | first_name | last_name |
+------------+------------+-----------+
| 1946-08-19 | William J. | Clinton   |
+------------+------------+-----------+
1 row in set (0.00 sec)

SELECT birth, first_name, last_name
FROM sampdb.president
WHERE birth >= ALL (SELECT MAX(birth) FROM sampdb.president);

+------------+------------+-----------+
| birth      | first_name | last_name |
+------------+------------+-----------+
| 1946-08-19 | William J. | Clinton   |
+------------+------------+-----------+
1 row in set (0.00 sec)

The earliest? (Oldest president)

SELECT birth, first_name, last_name
FROM sampdb.president
WHERE birth = (SELECT MIN(birth) FROM sampdb.president);

+------------+------------+------------+
| birth      | first_name | last_name  |
+------------+------------+------------+
| 1732-02-22 | George     | Washington |
+------------+------------+------------+
1 row in set (0.00 sec)

'ambiguous column name' error refers to database, table, and column

Posted: 20 Jul 2013 10:14 AM PDT

sqlite3 gives an 'ambiguous column name' error with the query below. The error itself references what seems to be as unambiguous a column name as one can imagine, identifying the database as 'sourcedb', the table as 'answers', and the field as 'markup'. Why is this query triggering the error?

sqlite> SELECT answers.*
   ...> FROM   sourcedb.answers
   ...>        INNER JOIN sourcedb.questions_tests
   ...>          ON sourcedb.questions_tests.testskey = '121212eczema'
   ...>        INNER JOIN sourcedb.answers_questions
   ...>          ON sourcedb.questions_tests.questionskey = sourcedb.answers.questionskey
   ...>        INNER JOIN sourcedb.answers
   ...>          ON sourcedb.answers.answerskey = sourcedb.answers_questions.questionskey;
Error: ambiguous column name: sourcedb.answers.markup

Some additional detail...

sqlite> pragma table_info(answers);
0|markup|TEXT|0||0
1|identifier|TEXT|0||0
2|text|TEXT|0||0
3|answerskey|INTEGER|0||1
4|answertype|TEXT|0||0
5|ReadPerms||0||0
6|UpdatePerms||0||0
7|DeletePerms||0||0
sqlite> pragma database_list;
0|main|
2|sourcedb|/root/.ttest/database/sqlite/ttest-simple.sqlite
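The likely cause: sourcedb.answers appears twice in the FROM clause (once at the top and once in the final INNER JOIN) without aliases, so any reference to its columns - including the ones implied by answers.* - is ambiguous. A sketch of the same join logic with each occurrence aliased:

SELECT a1.*
FROM   sourcedb.answers AS a1
       INNER JOIN sourcedb.questions_tests AS qt
         ON qt.testskey = '121212eczema'
       INNER JOIN sourcedb.answers_questions AS aq
         ON qt.questionskey = a1.questionskey
       INNER JOIN sourcedb.answers AS a2
         ON a2.answerskey = aq.questionskey;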

Moving large databases

Posted: 20 Jul 2013 09:08 PM PDT

I have a CentOS server where /var/lib/mysql/ is 125GB (the disk has 1GB of free space).

Ordinarily I would use mysqldump to back up the databases, but I don't normally work with such large databases, so I need to know the safest way of copying them over to a new server.

All advice appreciated!
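One common approach, sketched under the assumptions that the source and target run the same MySQL version and you can afford downtime: stop mysqld so the files are consistent, then copy the whole datadir at the file level. No intermediate dump file is written, so the 1GB of free space is not a problem:

# on the old server
service mysqld stop
rsync -av /var/lib/mysql/ newserver:/var/lib/mysql/

Alternatively, mysqldump can be piped straight over SSH into mysql on the new server, which also avoids staging a 100GB+ dump file locally.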

MySQL - password encryption type

Posted: 20 Jul 2013 09:13 AM PDT

I would like to confirm whether MySQL uses SHA1 for PASSWORD() hashing, and why the test below doesn't work:

mysql> GRANT USAGE ON *.* TO 'test'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> SELECT user,host,password FROM mysql.user WHERE user='test';
+------+-----------+-------------------------------------------+
| user | host      | password                                  |
+------+-----------+-------------------------------------------+
| test | localhost | *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
+------+-----------+-------------------------------------------+
1 row in set (0.01 sec)

mysql> \q
Bye
[root@test ~]# echo -n "password" | sha1sum | awk '{print toupper($1)}'
5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
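The missing piece: since MySQL 4.1, PASSWORD() is not a single SHA1 - it is SHA1 applied twice, with the inner digest taken as raw binary, and the result prefixed with '*'. The test fails because sha1sum was only applied once. Continuing the shell experiment (xxd -r -p converts the first hex digest back to binary before the second pass):

[root@test ~]# echo -n "password" | sha1sum | awk '{print $1}' \
    | xxd -r -p | sha1sum | awk '{print toupper($1)}'
2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19

That matches the mysql.user value, minus the leading '*'.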

Databases not coming up in SQL Server 2008 R2

Posted: 20 Jul 2013 11:14 AM PDT

Here is something that happened on my system that I just noticed.

None of the databases are showing up in the Object Explorer window, even though the server connection succeeds.

It is showing only one database, which I created exactly a year ago.

How can I get those databases back? They are really important.

Improving performance of a query by using greater than on a primary key

Posted: 20 Jul 2013 06:05 AM PDT

I have a reporting table where I store descriptions:

tableA

sno   | Project | name  | description        | mins
1     | prjA    | nameA | ABC -10% task done | 30
...
3000  | prjA    | nameB | ABC -70% task done | 70

I want to query the description field and save the result in another table:

tableB

id | valueStr | total_mins | last_sno
1  | ABC      | 100        | 3000

If there is no entry in the second table, I create one with default values.

If there is an entry in the second table, I update it with the new total_mins and advance last_sno to the latest value (say 3300), so that the next time I query I can start from last_sno.

Query:

SELECT last_sno FROM tableB WHERE valueStr = 'ABC'

Then, matching on the first 3 characters of the description field:

SELECT max(sno), sum(mins) FROM tableA
WHERE sno > last_sno AND description LIKE 'ABC%'

Since the first table has millions of rows, I search it with sno > last_sno - that should help performance, right?

But EXPLAIN shows that it scans the same number of rows as when I query the first table from the first sno.
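A range condition on the primary key only helps if the optimizer actually picks the PK for access; combined with the LIKE filter it may choose another path and read from the start. One hedged option is a composite index covering both predicates (the 3-character prefix is an assumption based on the 'ABC' pattern; verify with EXPLAIN afterwards):

ALTER TABLE tableA ADD INDEX idx_desc_sno (description(3), sno);

With that index, MySQL can seek to the 'ABC' prefix and range-scan sno > last_sno within it, instead of scanning all matching descriptions from the beginning.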

PostgreSQL: Group By primary key or DISTINCT increases query time over 1000x with limit

Posted: 19 Jul 2013 11:23 PM PDT

Also see http://stackoverflow.com/questions/17741167/hibernate-jpa-improve-performance-of-distinct-query but I realized this is mainly a PostgreSQL issue.

My application uses a 3rd-party extension to PostgreSQL for searching chemical structures. This is in general slow. I can not change the SQL directly, as the application uses Hibernate and a native query is not an option.

I have a many-to-many relationship whose link table has an additional column: basically, a Mixture consists of Elements, and an element occurs in the mixture at a certain percentage. I use Spring Data JPA with QueryDSL, Hibernate, and PostgreSQL; the query has 2 joins.

I'm now searching all Mixtures that contain an element matching the given criteria. Because a mixture can have multiple elements that match, the query may return the same entity multiple times. I want to prevent that, hence DISTINCT or GROUP BY on the primary key.

The query is also paged, meaning it uses limit and offset. It runs perfectly fine without either DISTINCT or GROUP BY, but then I can get duplicate rows. If I add either one, the query is over 1000 times slower.

Query with DISTINCT (note SQL from hibernate):

select distinct
    simplecomp0_.chem_compound_id as chem1_0_,
    --snipped about 10 more columns, all short varchar or date fields
from simple_compound simplecomp0_
    inner join compound_composition compositio1_
        on simplecomp0_.chem_compound_id = compositio1_.chem_compound_id
    inner join chemical_structure chemicalst2_
        on compositio1_.chemical_structure_id = chemicalst2_.structure_id
where
    chemicalst2_.structure_id @ ('CCNc1ccccc1', '')::bingo.sub
limit 5
offset 5

EXPLAIN ANALYZE with DISTINCT:

"Limit  (cost=5984.58..5984.63 rows=5 width=1645) (actual time=6342.541..6342.543 rows=5 loops=1)"  "  ->  HashAggregate  (cost=5984.53..5989.79 rows=526 width=1645) (actual time=6342.538..6342.542 rows=10 loops=1)"  "        ->  Nested Loop  (cost=0.00..5971.38 rows=526 width=1645) (actual time=7.289..6166.512 rows=128527 loops=1)"  "              ->  Nested Loop  (cost=0.00..4445.81 rows=526 width=8) (actual time=7.281..5694.663 rows=128527 loops=1)"  "                    ->  Index Scan using idx_chemical_structure on chemical_structure chemicalst2_  (cost=0.00..20.26 rows=526 width=8) (actual time=7.262..5013.620 rows=128508 loops=1)"  "                          Index Cond: (chemical_structure @ ROW('CCNc1ccccc1'::text, ''::text)::bingo.sub)"  "                    ->  Index Only Scan using compound_composition_pkey on compound_composition compositio1_  (cost=0.00..8.40 rows=1 width=16) (actual time=0.003..0.004 rows=1 loops=128508)"  "                          Index Cond: (chemical_structure_id = chemicalst2_.structure_id)"  "                          Heap Fetches: 128527"  "              ->  Index Scan using idx_pk on simple_compound simplecomp0_  (cost=0.00..2.89 rows=1 width=1645) (actual time=0.002..0.003 rows=1 loops=128527)"  "                    Index Cond: (chem_compound_id = compositio1_.chem_compound_id)"  "Total runtime: 6344.584 ms"  

also http://explain.depesz.com/s/ocC

The long runtime is caused by searching the 3rd-party index for the chemical structure search. For some reason the whole index is searched.

If the DISTINCT is removed, limit and offset are correctly applied to the 3rd-party index and the query is fast:

EXPLAIN ANALYZE:

"Limit  (cost=56.76..113.52 rows=5 width=1645) (actual time=11.135..11.472 rows=5 loops=1)"  "  ->  Nested Loop  (cost=0.00..5971.38 rows=526 width=1645) (actual time=10.783..11.469 rows=10 loops=1)"  "        ->  Nested Loop  (cost=0.00..4445.81 rows=526 width=8) (actual time=10.774..11.406 rows=10 loops=1)"  "              ->  Index Scan using idx_chemical_structure on chemical_structure chemicalst2_  (cost=0.00..20.26 rows=526 width=8) (actual time=10.755..11.313 rows=10 loops=1)"  "                    Index Cond: (chemical_structure @ ROW('CCNc1ccccc1'::text, ''::text)::bingo.sub)"  "              ->  Index Only Scan using compound_composition_pkey on compound_composition compositio1_  (cost=0.00..8.40 rows=1 width=16) (actual time=0.006..0.006 rows=1 loops=10)"  "                    Index Cond: (chemical_structure_id = chemicalst2_.structure_id)"  "                    Heap Fetches: 10"  "        ->  Index Scan using simple_compound_pkey on simple_compound simplecomp0_  (cost=0.00..2.89 rows=1 width=1645) (actual time=0.004..0.004 rows=1 loops=10)"  "              Index Cond: (chem_compound_id = compositio1_.chem_compound_id)"  "Total runtime: 12.052 ms"  

Also http://explain.depesz.com/s/gU2

Is there any way I can tune PostgreSQL to apply the 3rd-party index only according to the limit and offset clauses when using DISTINCT?

EDIT:

After further thinking about the issue I came to the conclusion that there is no solution I can implement. With GROUP BY or DISTINCT the whole query obviously must be run regardless of the limit clause. And if the whole query is run, the 3rd-party index must be used, and that takes time (without that index such a search would take minutes, not seconds).

Now about statistics. Here is a quote from the supplier:

The cost for the structure index for operator @ is underestimated and hard coded, for Postgres to use the index in almost all cases. Again, as I mentioned, the cost-estimation function is not implemented yet (we'll let you know when it's ready).

Restore to last transaction log restore

Posted: 20 Jul 2013 03:20 PM PDT

We have a log shipping setup where we replicate from our primary database to a secondary. That database is, by way of log shipping, in standby and thus read-only. We set it up so we could run reports from it, but it seems the reporting software needs to go through the application server, which needs to log onto the server - in this case, Microsoft Dynamics AX and Microsoft Dynamics ERP.

Our plan is to have a tertiary database - this would also get log backups via log shipping, but in a different way - we would like to roll the database back to what it was at the last transaction log restore, then restore the latest log file(s). Is this possible, and how would we go about it?

EDIT 2013/07/21:

Allow me to rephrase my question. If I have a SQL database with full transaction logging, can I tell it to roll back all changes to a certain point, and if so, how may I accomplish this?

Thanks
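A sketch of the standard point-in-time pattern (names and paths are placeholders): SQL Server cannot roll a recovered database backwards, so you always rebuild forward from backups - restore the full backup WITH NORECOVERY, then roll the logs forward and stop at the moment you want.

RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backups\MyDb_full.bak'    -- placeholder path
    WITH NORECOVERY;

RESTORE LOG MyDb
    FROM DISK = N'D:\Backups\MyDb_log1.trn'    -- placeholder path
    WITH STOPAT = '2013-07-20T12:00:00',       -- the point to stop at
         RECOVERY;

Using WITH STANDBY instead of NORECOVERY on each log restore keeps the copy readable between restores, which is how log shipping standby mode works.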

SQL server agent SSIS error

Posted: 20 Jul 2013 01:30 AM PDT

I get the following error when I execute my package as a SQL server agent job.

It is an SSIS 2008 package running on a SQL Server 2008 instance. My package security is DontSaveSensitive.

I don't even know how to begin fixing this error.

Where should I check first?

Date                a value of time
Log                 Job History (MyJob)

Step ID             1
Server              PCTSQL004
Job Name            MyJob
Step Name           Job_1
Duration            00:00:00
Sql Severity        0
Sql Message ID      0
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted   0

Message
Executed as user: CS\DmcSysManager. The process could not be created for step 1 of job 0x63BB5A86DB23F947866D2A806BE4CC6B (reason: A required privilege is not held by the client). The step failed.

Is this a good strategy for importing a large amount of data and decomposing as an ETL?

Posted: 20 Jul 2013 02:31 PM PDT

I have a set of five tables (a highly decomposed schema for an ETL if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a SLEW of other tables, including inserts that just rely on the values I just inserted into the first tables.

I can do my inserts as an A, B, C process, where I insert into the first table, then insert into some table S where exists in A + T (being some other table that has preloaded "configuration data"), then inserting into Z where exists in B + U, etc.

Should I be trying to batch those inserts with a cursor (I know, stone the traitor) or should I just run the raw INSERT INTO scripts and let the server die a thousand heat deaths? Basically I'm worried about starving the server or causing it to collapse from too many index updates or something else related to the inserts.

Should I stage out the inserts as:

  1. Insert into one set of tables
  2. Insert into secondary tables based on the first tables
  3. Insert into tertiary tables, etc.

OR should I insert into all the tables where the data is needed, but do it via cursor, in a "for loop" style pattern of 100k rows at a time (see the sketch at the end of this question)?

FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors) but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards).

Also bear in mind that I'm normally a C# dev, but I've got the most TSQL experience here and I'm trying to make the best process I can for raw-loading this data as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits.

Things I think are important to the question:

  1. There will be no other load on this server when I do this, I will have complete and total control of the box, and I'm the only user interested here (this is the initial data load before anyone else can do anything with the box)
  2. There are indexes on all the tables, where appropriate, and there are FKs and other predicates
  3. The entire db system is in use for slews of other clients right now, so I know the indexes are needed for operation, but if I should do something to disable those and re-enable them to make the inserts etc faster, I'm looking for those sorts of optimization hacks.
  4. I'm waiting on sample data but I expect the maximum records in a given table to be about a million rows, fairly wide, and that I'm going to insert into those other tables at most a million rows for the ancillary tables that can come secondary or tertiary.
  5. The hardware is "average".
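A sketch of the 100k-rows-at-a-time pattern referenced above - no cursor required, just a loop that moves a bounded chunk per iteration so locks and log growth stay small. Table and column names are placeholders:

DECLARE @batch int = 100000, @rows int = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (id, col1)
    SELECT TOP (@batch) s.id, s.col1
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.id = s.id);

    SET @rows = @@ROWCOUNT;   -- loop ends when nothing is left to copy
END

For a one-time initial load with exclusive access, it is also common to disable non-clustered indexes and constraint checks first and rebuild them afterwards; measure both ways on your sample data before committing to either approach.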

Dropping Hypothetical Indexes

Posted: 20 Jul 2013 11:31 AM PDT

In the past I thought I'd deleted hypothetical indexes using a DROP INDEX statement for clustered indexes and a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to cleanup; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I R-Click on the object, both the 'Script As' and 'DELETE' options are greyed out.
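For reference, DTA's hypothetical indexes are flagged in the catalog, which makes them easy to enumerate before deciding how to drop them. A quick sketch:

SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name
FROM sys.indexes
WHERE is_hypothetical = 1;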

SA permissions issues with many nested objects

Posted: 20 Jul 2013 05:32 PM PDT

I have a broker application that's relatively complicated.

Today, after I made some changes, I started getting the error:

The server principal 'sa' is not able to access the database 'XYZ' under the current security context.

The whole scenario up to the point of the error is:

(In Database ABC)

  • A broker message gets submitted into a queue
  • The queue reader picks it up
  • The queue reader updates a table to indicate the work is starting
  • This table has an update trigger on it. The trigger
    • Checks a table in database XYZ to make sure the inserted value for this field is valid
    • The table is accessed via a synonym

The check in the trigger I believe is what is causing the issue.

If I run the update manually, it works fine. I have also used EXECUTE AS to run the update manually as sa which works fine.

Other relevant facts:

  • sa is the owner of both database ABC and database XYZ
  • there's no funny business with the sa account - it's db_owner role in both DBs as well

Is there some sort of strange scoping happening because all this is running in the context of broker?

Updates

Some more info:

  • DB ownership chaining is on at the server level but not in the DBs. I turned it on and it made no difference.
  • Using a three part name instead of a synonym didn't make a difference
  • Profiler trace shows that the statements are running as SA and not another login
  • Setting both databases TRUSTWORTHY to ON didn't make a difference
  • If I run the queue activation proc manually, it processes correctly (under my credentials).

How should I set up my social network database design?

Posted: 20 Jul 2013 08:31 AM PDT

I am designing a DB for a social-network-type website where users enter a lot of information, varying from family member details, education, and employment to personal favorites such as TV, movies, music, food, books, etc. I am using InnoDB and expect exponentially increasing writes but far fewer reads. I already have 26 tables.
My question: is it better to have a large number of individual tables (e.g. TV, movies, music, food, books) or to put them under one big table such as MyPersonalFavorites to reduce the number of tables? I fear that there will be 26 individual disk I/Os to write one person's information with my current design.

Error "#1118 - row size too large" on the first row of the table only

Posted: 20 Jul 2013 02:30 AM PDT

I ran into a problem using a MySQL database. I have some columns of type TEXT, but when I try to enter data into the first row I get the error "#1118 - Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. You have to change some columns to TEXT or BLOBs" for anything with more than 15 characters. This only occurs for the first row; all the other rows in the column and table work just fine. Help!

Field  Type     Collation        Null  Default  Extra
pid    int(11)                   No    None     AUTO_INCREMENT
3      text     utf8_general_ci  Yes   NULL
6      text     utf8_general_ci  Yes   NULL
7      text     utf8_general_ci  Yes   NULL
8      text     utf8_general_ci  Yes   NULL
9      text     utf8_general_ci  Yes   NULL
10     text     utf8_general_ci  Yes   NULL
21     text     utf8_general_ci  Yes   NULL
22     text     utf8_general_ci  Yes   NULL
23     text     utf8_general_ci  Yes   NULL
24     text     utf8_general_ci  Yes   NULL
25     text     utf8_general_ci  Yes   NULL
26     text     utf8_general_ci  Yes   NULL
27     text     utf8_general_ci  Yes   NULL
28     text     utf8_general_ci  Yes   NULL
29     text     utf8_general_ci  Yes   NULL
30     text     utf8_general_ci  Yes   NULL
31     text     utf8_general_ci  Yes   NULL
32     text     utf8_general_ci  Yes   NULL
33     text     utf8_general_ci  Yes   NULL
34     text     utf8_general_ci  Yes   NULL
35     text     utf8_general_ci  Yes   NULL
36     text     utf8_general_ci  Yes   NULL
37     text     utf8_general_ci  Yes   NULL
38     text     utf8_general_ci  Yes   NULL
39     text     utf8_general_ci  Yes   NULL
40     text     utf8_general_ci  Yes   NULL
... 41-59 follow

Indexes:

Keyname  Type   Unique  Packed  Field  Cardinality  Collation  Null
PRIMARY  BTREE  Yes     No      pid    61           A

Server Version: 5.5.9
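One hedged explanation for MySQL 5.5: with the default Antelope/COMPACT row format, InnoDB keeps a 768-byte prefix of every TEXT column in the row itself, so dozens of TEXT columns can exceed the ~8126-byte row limit as soon as enough of them hold more than a trivial amount of data. Switching the table to the Barracuda DYNAMIC format stores long columns fully off-page (requires innodb_file_per_table=1; the table name below is a placeholder):

SET GLOBAL innodb_file_format = Barracuda;
ALTER TABLE mytable ROW_FORMAT = DYNAMIC;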

Database user specified as a definer

Posted: 20 Jul 2013 10:31 AM PDT

I have a view in my database. The problem is below.

Error SQL query:

SELECT *
FROM `lumiin_crm_prod`.`v_contact`
LIMIT 1;

MySQL said:

1449 - The user specified as a definer ('lumicrm'@'%') does not exist

I Googled for a solution:

the user was created for a specific host, and not globally.

How do I create the user for the global ('%') host?
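A minimal sketch, assuming you can pick a new password for the account: recreate the missing definer with the '%' host the view expects. The grant is illustrative - give it only what the view actually needs.

CREATE USER 'lumicrm'@'%' IDENTIFIED BY 'new_password';  -- placeholder password
GRANT ALL ON lumiin_crm_prod.* TO 'lumicrm'@'%';
FLUSH PRIVILEGES;

Alternatively, recreate the view with a DEFINER clause pointing at an account that does exist.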

Problem in Fetching Result set in Desired Format

Posted: 20 Jul 2013 04:31 AM PDT

I have 2 tables with the following columns:

Person
------
person_id,
first_name,
middle_name,
last_name

Items
-----
person_item_id,
person_id,
item_type,
status,
ordered_date

The query is

select
    Person.person_id as PID,
    group_concat(Items.item_type) AS ITYPE,
    group_concat(Items.status) AS STATUS,
    group_concat(Items.ordered_date) AS ODATE,
    group_concat(Items.person_item_id) AS IID
from
    Person inner join
    Items ON Person.person_id = Items.person_id
group by person_id;

The resultset returned is as follows

PID  ITYPE    STATUS   ODATE                        IID
1    A,A,B,C  Y,N,Y,Y  2/5/2012,5/5/2012,17/5/2012  1,1,2
2    A,B      Y,N      5/5/2012,15/6/2012           1,2

One person can have many item types and can order the same item many times. I want to display the result set so that, whether or not a given item was purchased by a person, every item present in the record set gets its own column, with the corresponding order_date, status, etc. shown underneath; if an item was purchased 3 times, all 3 corresponding records appear sequentially.

For eg :-

If there are 3 items A, B,C then output should be shown on the screen as :-

PID    A            B           C
1      Y            N           Y
       2/5/2012     5/5/2012    7/5/2012
       -----------
       Y
       17/5/2012
2      Y            N
       12/6/2012    15/6/2012

Now suppose there is another item. What I am doing is first fetching the result set from the above query, then traversing it to check for the existence of that item; if the item is present, I show it and its corresponding values. So first I have to traverse the whole result set to get the item_type, and only then can I show it.

Best practice for tempdb log file

Posted: 20 Jul 2013 12:30 AM PDT

I have read many blogs here and there about how to configure tempdb data files, but I didn't find any information regarding the tempdb log file.

Here's the strategy that I'm presently using with my tempdb:

  • I have used the recommendations of Paul Randal on how to split my tempdb data files
  • I have set the size of my tempdb data files to their maximum and disabled autogrowth. For example, I have 100 GB of free disk space and set the size of my 8 tempdb data files to 10 GB each. This prevents fragmentation on the disk, as recommended by Brent Ozar, and also leaves 20 GB free for my log file.

But like I said, nobody is talking about the tempdb log file. What should I do with it? On my setup, this file is at the same place as the tempdb data files. What is the size and the autogrowth value that I should use with the tempdb log file?
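In the same spirit as the data files, the usual advice is to pre-size the log once to a value covering your heaviest tempdb activity and keep a sane autogrowth as a safety net. A sketch, assuming the default logical name templog and the 20 GB you reserved:

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 10240MB, FILEGROWTH = 1024MB);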

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 20 Jul 2013 03:32 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think now that I can achieve this by wrapping the sql-script in a secondary "caller script". I.e., the caller script calls the other script with @ and then has a fixed exit; after that. The other script doesn't need an exit that way.

  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus, or something else)?
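A sketch of the caller-script idea from the second bullet, extended to also abort on errors instead of continuing (it assumes the real work lives in script.sql and takes one positional parameter):

-- caller.sql
WHENEVER SQLERROR EXIT FAILURE   -- abort instead of continuing on SQL errors
WHENEVER OSERROR  EXIT FAILURE
@script.sql &1
EXIT

Run it non-interactively with `sqlplus -S -L user/password @caller.sql value1`; the -L flag makes sqlplus attempt the logon only once instead of re-prompting on failure.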

Database schema design help needed

Posted: 20 Jul 2013 09:31 AM PDT

I am developing a PHP application expecting millions of records, both parent and child. My goal is to design an optimized database to achieve high speed and performance.

This application will have 1M users, each user will have their own dashboard where they can create their own users/roles, pages and posts.

I am thinking about two possible solutions

  1. Use only 1 table for all users, pages and posts and they will have a foreign key of the owner
  2. Use separate tables for all users, like user1_pages, user1_posts, user1_users

I think with #1 queries will be slow, as all users will be sharing one table; #2 also doesn't seem a perfect solution, as the number of tables will grow with the number of users.

Please suggest the best possible solution and share tips for designing an optimized MySQL schema.

Object name 'Clean Up History' in use when trying to complete the maintenance plan wizard in SQL Server 2008 R2

Posted: 20 Jul 2013 03:30 AM PDT

I am trying to create a maintenance plan on a instance running SQL Server 2008 R2 SP1 (no CU's installed).

When completing the wizard I get the following error:

The object name cannot be changed from "{1E1746D3-B671-4799-8F61-7EE6117257C2}" to "Clean Up History" because another object in the collection already uses that name. Use a different name to resolve this error. ({1E1746D3-B671-4799-8F61-7EE6117257C2})

I've checked both [msdb].[dbo].[sysjobsteps] and [msdb].[dbo].[sysjobs] and neither contains the name "Clean Up History".

The server is being backed up by Microsoft DPM. Could it be that DPM inserts a cleanup job somewhere that I don't see?

Second time query execution using different constants makes faster?

Posted: 20 Jul 2013 12:31 PM PDT

Can someone explain, or point me to documentation on, how index execution works with different constants in MySQL. I notice that only the first execution against the table takes a long time; after that, even with different constants, the query executes very quickly. I would like the query to take the same amount of time every time it executes with different constants - is there a parameter I can set on or off?

Query executed time : 9 mins.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
    -> FROM billingreport AS br
    -> WHERE br.`addeddate` BETWEEN '2013-02-01 00:00:00' AND '2013-02-28 23:59:59'
    -> AND br.`status` = 'success'
    -> AND br.`idvendor` = 10
    -> GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 2887152
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.00 sec)

Query executed time : 18 Secs.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
    -> FROM billingreport AS br
    -> WHERE br.`addeddate` BETWEEN '2013-01-01 00:00:00' AND '2013-01-31 23:59:59'
    -> AND br.`status` = 'success'
    -> AND br.`idvendor` = 10
    -> GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 3004089
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.01 sec)
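A hedged explanation: the first run reads the index and row pages from disk; later runs, even with different constants, hit pages already cached in the InnoDB buffer pool, so they are fast. There is no supported switch to make every run equally slow. To see how much caching is involved, compare physical vs. logical reads around each run:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';          -- reads from disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';  -- reads from memory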

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 20 Jul 2013 06:32 PM PDT

I have setup SSRS on SQL Server 2008 in native mode.

As an administrator I can login to report manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine, I went into Report Manager as Admin, and at the top level set permissions that should assign the local user to all roles.

When I login to the machine as that user, and then navigate to Report Manager I just get the heading for the page, but do not see any of the folders that are configured.

I've checked and the folders are set to inherit parent permissions and they are showing the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager - shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer) - rsAccessDeniedError

Privileges needed for Oracle Text

Posted: 20 Jul 2013 01:35 PM PDT

I'm new to Oracle Text and I am using it to support a search feature in an application I have written. Everything seems to be working correctly, except I'm not happy that I had to store a bunch of things in the ctxsys schema in order to get around insufficient-privilege issues. (This is with Oracle 10gR2.)

Here is the SQL script I have to set up Oracle Text for my application

-- Since the CTXSYS schema was created when Oracle Text was installed
-- it does not yet have permission to select on certain tables it needs to
-- for the below procedure
grant select on dmg.table1 to CTXSYS;
grant select on dmg.table2 to CTXSYS;
grant select on dmg.table3 to CTXSYS;
grant select on dmg.table4 to CTXSYS;
grant select on dmg.table5 to CTXSYS;

create or replace procedure ctxsys.attr_indexing_procedure (
    rid  in              rowid,
    tlob in out NOCOPY   clob )
is
begin
    -- This procedure queries the above five tables to extract text and
    -- combine it into a single document placed into tlob
end;
/

begin
    ctx_ddl.create_preference('dmg.my_datastore', 'user_datastore' );
    ctx_ddl.set_attribute( 'dmg.my_datastore', 'procedure', 'CTXSYS.attr_indexing_procedure' );
end;
/

begin
    ctx_ddl.create_preference( 'dmg.my_index_lexer', 'BASIC_LEXER' );
    ctx_ddl.set_attribute( 'dmg.my_index_lexer', 'base_letter', 'YES');
end;
/

begin
    ctx_ddl.create_preference('dmg.MY_STEM_FUZZY_PREF', 'BASIC_WORDLIST');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','STEMMER','ENGLISH');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_MATCH','ENGLISH');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_SCORE','0');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_NUMRESULTS','5000');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','SUBSTRING_INDEX','TRUE');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','PREFIX_INDEX','TRUE');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','WILDCARD_MAXTERMS','5000');
end;
/

-- I would prefer that this index be owned by dmg but it looks like that would
-- require the create any table privilege on dmg (because Oracle Text will
-- create tables in the ctxsys schema when creating the index). Therefore,
-- ctxsys will own the text index.
create index ctxsys.ix_doi_attr on table1(column1) indextype is ctxsys.context
  parameters( 'datastore dmg.my_datastore lexer dmg.my_index_lexer wordlist dmg.DOI_STEM_FUZZY_PREF');

-- Schedule the index to sync every 4 hours
declare
  job_num number;
  nlsvar varchar2(4000);
  envvar raw(32);
begin
  select nls_env,misc_env into nlsvar,envvar from dba_jobs
   where rownum<2 and nls_env is not null and misc_env is not null;
  select max(job)+1 into job_num from dba_jobs;
  sys.dbms_ijob.submit(job=>job_num, luser=>'CTXSYS', puser=>'CTXSYS', cuser=>'CTXSYS',
                       what=>'ctx_ddl.sync_index(''ctxsys.ix_doi_attr'');',
                       next_date=>sysdate+1/1440, interval=>'SYSDATE+240/1440',
                       broken=>false, nlsenv=>nlsvar, env=>envvar);
  commit;
  dbms_output.put_line('job '||job_num||' has been submitted.');
end;
/

This script is intended to be run by my personal user, which has the DBA role. All the relevant tables are owned by the "DMG" schema, which has very limited privileges.

As you can see, I am using a user_datastore to aggregate text from multiple tables into a single document. Also, I could only make this work if the indexing procedure, the index itself, and the dbms_job to sync the index periodically were all owned by ctxsys, which seems like a bad idea. Preferably, I would like everything to be owned by the DMG schema.

The biggest issue I ran into was in the create index statement. It would always fail with

SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-01031: insufficient privileges
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
29855. 00000 -  "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause:    Failed to successfully execute the ODCIIndexCreate routine.
*Action:   Check to see if the routine has been coded correctly.

After some digging I figured out that the create index statement was attempting to create some tables under the ctxsys schema called DR$IX_DOI_ATTR$I, DR$IX_DOI_ATTR$K, DR$IX_DOI_ATTR$N, DR$IX_DOI_ATTR$P, and DR$IX_DOI_ATTR$R. I found that the DMG schema could create the index if I gave it the CREATE ANY TABLE privilege, but that is not something I want to do in production.

How can I make everything owned by the DMG schema (i.e. minimum privileges I need to grant the DMG schema)? Is there a way for me to remove the grants on the dmg tables to ctxsys?

EDIT:

I was originally incorrect in saying that giving the DMG schema the CREATE ANY TABLE privilege works. It instead changes the error message to this:

SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-01536: space quota exceeded for tablespace 'DMG'
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
29855. 00000 -  "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause:    Failed to successfully execute the ODCIIndexCreate routine.
*Action:   Check to see if the routine has been coded correctly.

This is a very strange message, because there are NO quotas set up anywhere on this database. I have no idea how it is hitting a quota limit.
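One hedged observation: ORA-01536 is raised whenever a user has a zero quota on a tablespace, which is the default for a user never granted one - not only when an explicit quota was configured. If the index tables should live in the DMG tablespace, granting the owning schema a quota there may be all that is missing:

ALTER USER dmg QUOTA UNLIMITED ON dmg;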

Linked Server Query Results: OPENQUERY vs Distributed Transactions

Posted: 19 Jul 2013 11:30 PM PDT

I have two databases. They are on different servers. One is named REPORTDB and the other is named TRANSDB. TRANSDB is referenced as a linked server on REPORTDB named TRANS.

I execute a query like this from REPORTDB using Four Part Naming and Distributed Transactions:

SET @val = ''
SET @DBName = 'TRANSDB'
SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
            + [name] + CHAR(10) + CHAR(13)
FROM TRANS.db.sys.tables;

I expect the result of this query to generate a CREATE SYNONYM statement for each table in TRANSDB inside my REPORTDB (Table1, Table2, Table3). However, it only creates one statement and that is the last table that is returned in the result set from TRANSDB (Table3).

This correctly returns three CREATE SYNONYM statements for Table1, Table2, and Table3 using OPENQUERY:

SET @val = ''
SET @DBName = 'TRANSDB'
SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
            + [name] + CHAR(10) + CHAR(13)
FROM OPENQUERY(TRANS, 'select [name] FROM db.sys.tables')

Since I do not wish to use openquery, how can I get the DISTRIBUTED TRANSACTION using four part naming to return results correctly?
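One workaround, sketched: materialize the remote names into a local temp table first, then run the variable-concatenation trick purely locally. Aggregating into a variable with `SELECT @var = @var + ...` is known to behave unreliably when the source is a remote/distributed query, and a local table sidesteps that:

SELECT [name]
INTO   #remote_tables
FROM   TRANS.db.sys.tables;

SET @val = ''
SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
            + [name] + CHAR(10) + CHAR(13)
FROM #remote_tables;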

Merge Replication Subquery Filters

Posted: 20 Jul 2013 05:19 AM PDT

I am using merge replication with SQL 2012. I have a number of articles in my publication and will have to come up with quite an elaborate set of filters.

If I add a filter on my article it can contain a sub query, but I noticed it doesn't pick up on changes properly.

For instance Table1, and Table2.

Filter on Table1 is

select * from Table1 where id in (select id from Table2)  

If more records are added to Table2 I would want the contents of Table1 to be changed accordingly when the sync process next occurs.

I found out that this is not working. Is it because Table2 was not an article in my merge replication publication when I tried it, or is it because merge replication doesn't support subqueries like this and can't recognise when a filter's result has changed?

What it would mean is that the sync process would have to scan each filter, and work out what tables the subquery relates to and figure out whether these have changed. Does it do something like this?

UPDATE:

I have tried a number of things, but they don't work.

1) Subquery - doesn't work. Here it confirms that this won't work:
   http://support.microsoft.com/kb/324362
2) Subquery calling a function - doesn't work
3) Subquery calling a function which takes a guid parameter which I change every time - doesn't work

So the only other option mentioned is a view, but I cannot get the view to work either.

The first problem is that if I replicate a view it just gets replicated as a view and the underlying table needs to be there. But Simon I read your other post and that is not what you are saying.

I think what you are saying is that I should be able to select a table in my replication publication, and then filter it based on a view like this,

select cols from vwUserFilter  

But when I look in management studio I see a filter for my table which looks like this,

SELECT <published_columns> FROM [dbo].[Berm] WHERE   

And that part of my filter is fixed, I cannot edit [dbo].[berm] and replace it with a view.

So what I think you are suggesting is that I can change the above to,

SELECT <published_columns> FROM [dbo].[BermView]  

And the BermView is a view which filters the Berm table. Is that right, and if so how?

[MS SQL Server] Can't reclaim space on log files



Can't reclaim space on log files

Posted: 20 Jul 2013 05:16 AM PDT

I have a separate drive just for my log files that is 40GB in size. The total size of all my logs is 1.5 GB, yet I only have 500MB free left on the drive. My biggest log file is only 550MB. I have tried a shrink and a truncate-only. How can I tell which log isn't releasing the space? I have had this same setup on this server for over 2 years and this is the first time I have had this issue. Any input is greatly appreciated.
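Two quick checks that usually identify the culprit - compare what SQL Server thinks each log file's size is against what is actually on the disk (if they disagree, something outside SQL Server, e.g. orphaned files from detached databases, is holding the space):

DBCC SQLPERF(LOGSPACE);   -- size and % used of every database log

SELECT DB_NAME(database_id) AS db, name, physical_name,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE type_desc = 'LOG'
ORDER BY size DESC;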

[SQL 2012] White space



White space

Posted: 19 Jul 2013 12:43 PM PDT

How do I edit the white space out of a table, or create an edited copy of a table I imported from Excel?
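If "white space" means leading/trailing blanks picked up from Excel, a minimal sketch (table and column names are placeholders):

UPDATE dbo.ImportedTable
SET SomeColumn = LTRIM(RTRIM(SomeColumn));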

Create table from select distinct.

Posted: 19 Jul 2013 01:02 PM PDT

I've tried and tested a SELECT DISTINCT. I'm inputting the results from this query, one column, into a table that I created that has two columns: a PK column and the field to populate from the SELECT DISTINCT query. The error message points to an int value, which only the PK column has.
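If the PK is an IDENTITY column, the usual pattern is to list only the data column in the INSERT so the key populates itself - a sketch with placeholder names:

INSERT INTO dbo.NewTable (ValueColumn)   -- omit the IDENTITY PK column
SELECT DISTINCT SourceColumn
FROM dbo.SourceTable;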

Add Full text component

Posted: 19 Jul 2013 03:47 AM PDT

I have an instance of SQL server currently running and I would like to add the Full Text component. I know the steps involved with adding that, but I was wondering if this can be done on the fly or if it will require a restart of SQL. Thanks for the help in advance

[T-SQL] Query performance



Query performance

Posted: 19 Jul 2013 06:13 AM PDT

Hi, when I add the below line of code to my stored procedure, the performance becomes very poor. Is there any other way I can do it?

AND CustNum = CASE WHEN ISNULL(@CustNum, 0) = 0 THEN CustNum ELSE @CustNum END

Thanks, PSB
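This is the classic catch-all predicate. One common rewrite, sketched with placeholder names (and assuming NULL alone means "all customers" - the original also treats 0 that way), keeps the optional filter simple and asks the optimizer to re-plan for the actual parameter value on each call:

SELECT Col1, Col2                 -- placeholder columns
FROM dbo.Customers                -- placeholder table
WHERE (@CustNum IS NULL OR CustNum = @CustNum)
OPTION (RECOMPILE);               -- plan for the actual @CustNum each time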

IF ELSE Query

Posted: 19 Jul 2013 06:35 AM PDT

I have written an IF ELSE statement into a sproc but I'd like to display the results (as in counts) from the statement in the "Results" pane after the sproc runs. How do I accomplish this?

IF EXISTS ({SELECT Query})
BEGIN
    {INSERT Version 1}
END
ELSE
BEGIN
    {INSERT version 2}
END

Thx. John
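A minimal sketch (placeholder tables and predicate): capture @@ROWCOUNT immediately after each INSERT and SELECT it at the end, which surfaces the count in the Results pane.

DECLARE @inserted int;

IF EXISTS (SELECT 1 FROM dbo.Source WHERE Flag = 1)   -- placeholder predicate
BEGIN
    INSERT INTO dbo.Target (Col) SELECT Col FROM dbo.Source WHERE Flag = 1;
    SET @inserted = @@ROWCOUNT;
END
ELSE
BEGIN
    INSERT INTO dbo.Target (Col) SELECT Col FROM dbo.OtherSource;
    SET @inserted = @@ROWCOUNT;
END

SELECT @inserted AS RowsInserted;   -- appears in the Results pane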

Record Being Duplicated In Join Statement

Posted: 19 Jul 2013 03:02 PM PDT

In the statement below, there is a problem where some records are occasionally duplicated. I have not been able to figure out where my problem is. When I complete the query below, it returns the correct number of records without any duplicates.

SELECT COUNT(*) AS Expr1
FROM salesExport AS SAL
LEFT OUTER JOIN tipTransactionPivot AS PIV
    ON SAL.rvcID = PIV.rvcID
    AND SAL.microsEmployeeNumber = PIV.microsEmployeeNumber
    AND SAL.businessDate = PIV.businessDate
WHERE (SAL.businessDate = '7/10/2013')

When I add the EMP table, it duplicates two of my records.

SELECT COUNT(*)
FROM employee AS EMP1
RIGHT OUTER JOIN salesExport AS SAL
    ON EMP1.microsEmployeeNumber = SAL.microsEmployeeNumber
LEFT OUTER JOIN tipTransactionPivot AS PIV
    ON SAL.rvcID = PIV.rvcID
    AND SAL.microsEmployeeNumber = PIV.microsEmployeeNumber
    AND SAL.businessDate = PIV.businessDate
WHERE SAL.businessDate = '7/10/2013'

I've tried a couple different approaches, but keep breaking something else... hopefully it's just tired eyes. Any help is appreciated. (BTW, the actual query is not a COUNT statement, but this returns the same problem and is significantly shorter.)
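Since the only change is the employee join, the duplicates almost certainly mean microsEmployeeNumber is not unique in the employee table. A quick check, sketched:

SELECT microsEmployeeNumber, COUNT(*) AS cnt
FROM employee
GROUP BY microsEmployeeNumber
HAVING COUNT(*) > 1;

If duplicates show up, either de-duplicate the table or join to a derived table that returns one row per employee number.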

PIVOT-ing solution needed for diary viewer

Posted: 19 Jul 2013 03:48 AM PDT

Hi all,

I have a table that stores "register" information, e.g. UserID, RegisterDate, Available/Unavailable, etc., but would like to display this on a month-by-month view for all members. Essentially, it would show the previous/current/next month name at the top, then for each member an entire list of dates within that month with a corresponding tick or cross to denote whether they were present or not.

Here's my code so far:

CREATE TABLE #Register (
    EntryID INTEGER IDENTITY(1,1) PRIMARY KEY
    ,EntryDate DATETIME
    ,EntryStatus VARCHAR(10)
    ,AssociatedUserID INTEGER)

CREATE TABLE #Users (
    UserID INTEGER
    ,Username VARCHAR(30))

INSERT INTO #Users(UserID, Username)
SELECT 1, 'Agent Smith'
UNION SELECT 2, 'The Oracle'
UNION SELECT 3, 'Neo'

INSERT INTO #Register(EntryDate,EntryStatus,AssociatedUserID)
SELECT '2013-07-12','X',1
UNION SELECT '2013-07-12','X',2
UNION SELECT '2013-07-12','0',3
UNION SELECT '2013-07-13','0',1
UNION SELECT '2013-07-13','0',2
UNION SELECT '2013-07-13','X',3
UNION SELECT '2013-07-14','X',1
UNION SELECT '2013-07-14','0',2
UNION SELECT '2013-07-14','0',3
UNION SELECT '2013-07-15','0',1
UNION SELECT '2013-07-15','0',2
UNION SELECT '2013-07-15','0',3
UNION SELECT '2013-07-16','X',1
UNION SELECT '2013-07-16','X',2
UNION SELECT '2013-07-16','0',3
UNION SELECT '2013-07-17','0',1
UNION SELECT '2013-07-17','X',2
UNION SELECT '2013-07-17','0',3
UNION SELECT '2013-07-18','0',1
UNION SELECT '2013-07-18','0',2
UNION SELECT '2013-07-18','0',3

DECLARE @p_ActiveDate DATETIME
DECLARE @ActiveMonth INTEGER
       ,@ActiveYear INTEGER

SELECT @ActiveMonth = MONTH(ISNULL(@p_ActiveDate,GETDATE()))
      ,@ActiveYear = YEAR(ISNULL(@p_ActiveDate,GETDATE()))

CREATE TABLE #MonthDays (
    MonthID INTEGER
    ,DayCount INTEGER
    ,[MonthName] VARCHAR(10)
    ,AbbrMonthName VARCHAR(3))

INSERT INTO #MonthDays(MonthID, DayCount,[MonthName],AbbrMonthName)
SELECT 1,31,'January','Jan'
UNION SELECT 2, CASE WHEN @ActiveYear % 4 = 0 THEN 28 ELSE 29 END,'February','Feb'
UNION SELECT 3,31,'March','Mar'
UNION SELECT 4,30,'April','Apr'
UNION SELECT 5,31,'May','May'
UNION SELECT 6,30,'June','Jun'
UNION SELECT 7,31,'July','Jul'
UNION SELECT 8,31,'August','Aug'
UNION SELECT 9,30,'September','Sep'
UNION SELECT 10,31,'October','Oct'
UNION SELECT 11,30,'November','Nov'
UNION SELECT 12,31,'December','Dec'

;WITH cteDates (CalendarDate, DayCount, MonthID)
AS (
    SELECT CONVERT(DATE,CONVERT(VARCHAR(4),@ActiveYear) + '-' + CASE WHEN @ActiveMonth < 10 THEN '0' ELSE '' END + CONVERT(VARCHAR(2),@ActiveMonth) + '-' + '01')
          ,md.DayCount
          ,md.MonthID
    FROM #MonthDays AS md
    WHERE md.MonthID = @ActiveMonth
    UNION ALL
    SELECT DATEADD(DAY,1,c.CalendarDate), c.DayCount, c.MonthID
    FROM cteDates c
    INNER JOIN #MonthDays AS md2 ON (md2.MonthID = c.MonthID)
    WHERE DATEADD(DAY,1,c.CalendarDate) <= DATEADD(DAY,md2.DayCount-1,CONVERT(DATE,CONVERT(VARCHAR(4),@ActiveYear) + '-' + CASE WHEN @ActiveMonth < 10 THEN '0' ELSE '' END + CONVERT(VARCHAR(2),@ActiveMonth) + '-' + '01')))

-- get the list of CalendarDates into a table variable
SELECT *
INTO #CalendarDates
FROM cteDates

DROP TABLE #Register
DROP TABLE #Users
DROP TABLE #MonthDays
DROP TABLE #CalendarDates

Now I have all the calendar dates and subsequent registry events, I'd like to PIVOT it so that I have something like this:

        1  2  3  4  5  6  7  8  9  10 ....
------------------------------------------
Name    X  0  0  X  0  X  0  X  0  0

...but unfortunately don't know how! As always, any help is gratefully received!

Thanks, Kevin.
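One way to get the day-per-column layout without a dynamic PIVOT is conditional aggregation. A sketch over the temp tables above (run it before the DROP TABLE cleanup, and extend the CASE list - or generate it dynamically - for days 3..31):

SELECT u.Username,
       MAX(CASE WHEN DAY(r.EntryDate) = 1 THEN r.EntryStatus END) AS [1],
       MAX(CASE WHEN DAY(r.EntryDate) = 2 THEN r.EntryStatus END) AS [2]
       -- ...repeat through [31]
FROM #Users u
LEFT JOIN #Register r ON r.AssociatedUserID = u.UserID
GROUP BY u.Username;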

Parsing XML using SQL Server 2008

Posted: 19 Jul 2013 08:09 AM PDT

DECLARE @X XML =
'<Movies>
  <Movie Name = "Titanic">
    <Genres>
      <Genre Name="Romance"/>
      <Genre Name="Tragedy"/>
    </Genres>
    <Rate Value = "10"/>
  </Movie>
  <Movie Name = "ABC">
    <Rate Value="15"/>
  </Movie>
</Movies>'

My expected output is:

Movie    Genre    Value
Titanic  Romance  10
Titanic  Tragedy  10
ABC      NULL     15

I hope you understood my requirement. I was able to do this when all the XML tags are present. If you have noticed, the GENRE tags for the ABC movie are missing, and my code is ignoring that movie. Can anyone help me with this please? Thanks in advance.
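The usual fix is to shred the Genre nodes with OUTER APPLY, which keeps movies that have no Genres element. A runnable sketch against the @X declared above (same batch):

SELECT
    m.n.value('@Name', 'varchar(50)')    AS Movie,
    g.n.value('@Name', 'varchar(50)')    AS Genre,   -- NULL when no Genres exist
    m.n.value('(Rate/@Value)[1]', 'int') AS Value
FROM @X.nodes('/Movies/Movie') AS m(n)
OUTER APPLY m.n.nodes('Genres/Genre') AS g(n);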

MSSQL OPENQUERY to ORACLE server

Posted: 19 Jul 2013 07:59 AM PDT

Hello. I wonder if anybody has an explanation for this: I have an MSSQL server, a database in it, and some tables. I made an OPENQUERY to get data from a remote Oracle database that I am trying to combine with some data on this server. Everything works fine; my question is about behavior. If my OPENQUERY has more than 3 WHERE clauses, it returns only one record.

Like in: WHERE field1='A' AND field2='B' AND field3='C' is OK, but WHERE field1='A' AND field2='B' AND field3='C' AND 1=1 returns only one record.

Thank you

How to add column and then sum

Posted: 19 Jul 2013 03:38 AM PDT

I have this statement and can't seem to figure this out. How can I SUM the 'current amount' column by month? I would like to add an additional column and then populate it with the summed value of 'current amount' by month. So, for example, January would have a summed total, February, etc.

select * from (
SELECT distinct pehPErcontrol,
case left(substring(pehPErcontrol,5,len(pehPErcontrol)),2)
    when '01' then 'January'
    when '02' then 'February'
    when '03' then 'March'
    when '04' then 'April'
    when '05' then 'May'
    when '06' then 'June'
    when '07' then 'July'
    when '08' then 'August'
    when '09' then 'September'
    when '10' then 'October'
    when '11' then 'November'
    when '12' then 'December'
end as [UltiMonth],
rtrim(eepNameLast) + ', ' + rtrim(eepNameFirst) + ' ' + coalesce(substring(eepNameMiddle,1,1) + '.', '') as Name,
eepNameLast AS [Last Name],
IsNull(eepNameSuffix,'') AS [Suffix],
eepNameFirst AS [First Name],
IsNull(eepNameMiddle,'') AS [Middle Name],
pehCurAmt AS [Current Amount],
pehCurHrs AS [Current Hours],
pehCoID AS [Company ID],
pehEEID AS [EE ID],
pehEmpNo AS [Emp No],
pehLocation AS [Location],
pehJobCode AS [Job Code],
pehOrgLvl1 AS [Org Level 1],
pehOrgLvl2 AS [Org Level 2],
pehOrgLvl3 AS [Org Level 3],
pehOrgLvl4 AS [Org Level 4],
pehPayGroup AS [Pay Group],
pehProject AS [Project],
pehShfShiftAmt AS [Shift Amount],
pehearncode AS [Earn Code],
pehIsVoided AS [IS Voided],
pehIsVoidingRecord AS [Voiding Record],
pehIsOvertime AS [Is Overtime]
FROM EmpPers
JOIN pearhist ph ON ph.pehEEID = eepEEID
JOIN WSISQL4.DASHBOARD.DBO.[OVERTIME_BUDGET_2013] ON [orglevel] = pehOrgLvl2
) t
right outer join WSISQL4.DASHBOARD.DBO.[OVERTIME_BUDGET_2013] ob
    on t.[UltiMonth] = ob.[month] and orglevel = [org level 2]
where pehPerControl > '201301011'
  AND [EARN CODE] = '0002'
  AND [IS Voided] <> 'Y'
  AND [Voiding Record] <> 'Y'
  AND [Is Overtime] = 'Y'
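A hedged sketch, assuming the first six characters of pehPErcontrol are YYYYMM (which the month CASE above implies): a window SUM adds the per-month total as an extra column without collapsing the detail rows.

SUM(pehCurAmt) OVER (PARTITION BY LEFT(pehPErcontrol, 6)) AS [Monthly Amount]

Add that expression to the inner SELECT list; every row in the same year-month then carries the same total.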

same variable in stored proc

Posted: 19 Jul 2013 06:25 AM PDT

Hi, in my stored proc I have 3-4 WHILE loops, and I need to use a different variable for each. My code is inside a single BEGIN...END block, so is there any way I can use the same variable name for the different WHILE loops?
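Yes - a variable declared once can be reused across loops; just reset it before each one. A minimal sketch:

DECLARE @i INT;

SET @i = 1;                 -- first loop
WHILE @i <= 3
BEGIN
    PRINT 'loop 1, pass ' + CAST(@i AS varchar(10));
    SET @i = @i + 1;
END

SET @i = 1;                 -- reset and reuse for the second loop
WHILE @i <= 2
BEGIN
    PRINT 'loop 2, pass ' + CAST(@i AS varchar(10));
    SET @i = @i + 1;
END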

[SQL Server 2008 issues] Help with SQL query



Help with SQL query

Posted: 19 Jul 2013 05:00 PM PDT

Hello all, I am not sure whether this is the right thread to post this or not. Anyways, I have one table as follows:

table mastertab
(op INT,
msisdn VARCHAR(12),
imei VARCHAR(20));

Sample values:

aa, 0191, 111222333
aa, 0191, 111222444
aa, 0192, 111222333
aa, 0192, 111222444
aa, 0192, 111222555
aa, 0193, 111222333
bb, 0171, 222222333
bb, 0171, 222222444
bb, 0172, 222222444
cc, 0152, 333222444

Now, I want the output to show as an (op, imei_count) set, where imei_count displays the total number of duplicate imei values, grouped by op. Expected output:

op  imei_count
--  ----------
aa  2
bb  1
cc  0

Any help in this regard will highly be appreciated.

Thanks, Mehfuz
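Reading "duplicate" as an imei appearing more than once under the same op reproduces the expected output. A sketch:

SELECT t.op, COUNT(d.imei) AS imei_count
FROM (SELECT DISTINCT op FROM mastertab) AS t
LEFT JOIN (
    SELECT op, imei
    FROM mastertab
    GROUP BY op, imei
    HAVING COUNT(*) > 1        -- imei repeated within this op
) AS d ON d.op = t.op
GROUP BY t.op;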

Tempdb issue

Posted: 07 Jul 2013 04:42 PM PDT

We are facing an issue with the tempdb file size. The initial size we allocate is 2 GB, but after restarting SQL Server the size drops to 1 MB - the 2 GB is gone. Can anyone help: how can the size be kept at 2 GB after restarting SQL Server? Any settings? Thanks, Jerry
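tempdb is recreated at every startup using the sizes recorded in the system catalog, so if it comes back at 1 MB, the recorded initial size is 1 MB and the 2 GB you saw was autogrowth. Setting the initial size explicitly makes it stick across restarts (tempdev is the default logical name of the first data file):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB);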

Where is the beginners page, i need help?

Posted: 19 Jul 2013 03:26 AM PDT

I need advice; this would be straightforward for most of you. I have a 1-week-old .bak file. Then HDD failure came. I reinstalled the same SQL (2008) and restored OK. The connecting application reports error OLE 80040E14. I assume the DB is corrupt, and I need to repair the MDF file with a tool, which is expensive. The database is 1.3 GB, but contains gold. Any response?
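Before paying for a repair tool, it is worth confirming the restored database is actually corrupt - a restore that completes successfully usually is not, and 80040E14 is a generic "syntax error or access violation" raised on the application side. A first check:

DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

If CHECKDB comes back clean, the problem is more likely in the application's queries or connection configuration than in the MDF.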

SQL Broker Performance Question

Posted: 19 Jul 2013 02:09 AM PDT

I am having some performance issues when a stored procedure is executed through a broker-based request. The SP is doing a simple join-based update. Sometimes it will update 1M rows in under 5 minutes. When it is bad, the same update for 300K rows will run for 6 hours. There appear to be no other processes competing for resources on the server. CPU usage remains constant during this time at about 30-35%. SQL wait stats during the execution time are high for OLEDB, BROKER_EVENTHANDLER & BROKER_RECEIVE_WAITFOR. Any ideas greatly appreciated.

Need A Help in DATA MASKING in SQL SERVER 2008

Posted: 19 Jul 2013 01:44 AM PDT

Hello friends, I want to mask certain fields in the employee_bank_account_info table, which hold very sensitive information. I searched for it and found various third-party tools online, which I cannot use in my current environment. I need help finding some way of doing data masking via SQL script, or using any built-in function or library in SQL Server 2008. Please give me any suggestions or examples. Thanks in advance.
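A minimal scripted-masking sketch (column name is a placeholder, and this is irreversible - run it only on a copy of the data): keep the last 4 characters and overwrite the rest.

UPDATE employee_bank_account_info
SET account_number = REPLICATE('X', LEN(account_number) - 4)   -- hypothetical column
                   + RIGHT(account_number, 4)
WHERE LEN(account_number) > 4;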

SSIS package creation

Posted: 19 Jul 2013 06:04 AM PDT

Hi, I need to import data from an Oracle table into a SQL Server table using SSIS. I'm using a data flow task with an OLE DB source, a Data Conversion transformation, and an OLE DB destination. Here is the script for the Oracle table. Can I have an equivalent SQL Server table script for it?

    CREATE TABLE AMC.POLICY (
        POLICY NUMBER NOT NULL, AGENCY NUMBER, WRITING_COMPANY NUMBER, NEW_WRITING_COMPANY NUMBER,
        DEC_ADDR NUMBER, POLICY_PREFIX NUMBER, NAMED_INSURED NUMBER, INSPECTOR NUMBER,
        RENEWED_AS NUMBER, NEW_WRITING_DATE DATE, RENEWED_FROM NUMBER, NEW_AGENCY NUMBER,
        NEW_AGENCY_DATE DATE, QUOTE_FLAG NUMBER(38) DEFAULT 0, PRODUCER VARCHAR2(80 BYTE),
        TERM_TYPE VARCHAR2(80 BYTE), CURRENT_DEC NUMBER, TERM_MONTHS NUMBER(38),
        CURRENT_DEC_DATE DATE, TERM_EFFECTIVE_DATE DATE, UMBRELLA VARCHAR2(80 BYTE),
        AGENCY_CODE NUMBER(38), TERM_EXPIRATION_DATE DATE, LEGAL_TEXT VARCHAR2(80 BYTE),
        POLICY_NBR VARCHAR2(80 BYTE), EMPLOYEE_FLAG NUMBER(38) DEFAULT 0, INCEPTION_DATE DATE,
        QUOTE_SUFFIX VARCHAR2(80 BYTE), ENDORSE_MODE NUMBER(38) DEFAULT 1,
        QUOTE_REASON VARCHAR2(40 BYTE), POLICY_SEARCH_NBR VARCHAR2(80 BYTE),
        APP_CHECK VARCHAR2(80 BYTE), AUDIT_FREQ NUMBER(38), TERM_NBR NUMBER(38) DEFAULT 1 NOT NULL,
        REMITTER VARCHAR2(40 BYTE), AUDIT_FORM VARCHAR2(40 BYTE), AUDIT_DATE DATE,
        MVR_REQUEST_DATE DATE, POLICY_STATUS VARCHAR2(40 BYTE), APP_REMITTER VARCHAR2(40 BYTE),
        POLICY_STATUS_DATE DATE, PYMT_PLAN VARCHAR2(40 BYTE), ACTIVITY_STATUS VARCHAR2(40 BYTE),
        APP_CASH_AMT FLOAT(126), ACTIVITY_STATUS_DATE DATE, APP_CHECK_KEY NUMBER,
        CANCEL_WAIT_DAYS NUMBER(38), CANCEL_EFFECTIVE_DATE DATE, APP_INSURED_NAME VARCHAR2(80 BYTE),
        CANCEL_REQUEST_BY VARCHAR2(40 BYTE), RISK_GROSS_AMT FLOAT(126),
        CANCEL_CHECK_SENT VARCHAR2(40 BYTE), CANCEL_INSPECTION NUMBER(38) DEFAULT 0,
        REINSTATED_EFFECTIVE_DATE DATE, NONRENEW_WAIT_DAYS NUMBER(38),
        NON_PAY_NOTICE NUMBER(38) DEFAULT 0, CANCEL_NOTICE_DATE DATE, NONRENEW_NOTICE_DATE DATE,
        NONRENEW_INSPECTION NUMBER(38) DEFAULT 0, CANCEL_PREM_DATE DATE, EXTENSION_DATE DATE,
        CANCEL_ACTUAL_DATE DATE, REISSUE_FEE NUMBER(38) DEFAULT 0,
        CREATE_ID VARCHAR2(30 BYTE) DEFAULT USER, CANCEL_NSF_AMT FLOAT(126),
        FIRST_MODIFIED DATE DEFAULT SYSDATE, AUDIT_ID VARCHAR2(30 BYTE) DEFAULT USER NOT NULL,
        LAST_MODIFIED DATE DEFAULT SYSDATE NOT NULL, REINSTATED_REASON VARCHAR2(40 BYTE),
        NON_PAY_COUNT NUMBER(38), BILL_TYPE VARCHAR2(40 BYTE), LAST_BILL_DATE DATE,
        NEXT_BILL_DATE DATE, NONRENEW_EFFECTIVE_DATE DATE, BILL_DUE_DATE DATE,
        CURRENT_GROSS_OS FLOAT(126), CURRENT_NET_OS FLOAT(126), COMMISSION_PCT FLOAT(126),
        CURRENT_INFORCE FLOAT(126), CARRY_DATE DATE, PREVIOUS_CARRIER VARCHAR2(40 BYTE),
        REFERRED_BY VARCHAR2(80 BYTE), CURRENT_MIN_DUE FLOAT(126), RENEWAL_CERT_COUNT NUMBER(38),
        FACULTATIVE_FLAG NUMBER(38) DEFAULT 0, WRITING_CODE NUMBER(38), BRANCH_CODE NUMBER(38),
        PRINT_CANCEL NUMBER(38) DEFAULT 0, PRINT_NONRENEW NUMBER(38) DEFAULT 0,
        PRINT_REINSTATE NUMBER(38) DEFAULT 0, PRINT_NR_RESCIND NUMBER(38) DEFAULT 0,
        PRINT_AGENCY NUMBER(38) DEFAULT 0, BILL_STATUS VARCHAR2(40 BYTE), BILL_STATUS_DATE DATE,
        NOTICE_STATUS VARCHAR2(40 BYTE), NOTICE_STATUS_DATE DATE, CANCEL_COUNT NUMBER(38),
        CANCEL_NOTICE_COUNT NUMBER(38), FULL_TERM NUMBER(38) DEFAULT 0,
        NEW_AGENCY_DEC_FLAG NUMBER(38), BUSINESS_LINE NUMBER, AGENCY_EXTENSION NUMBER(38),
        LEGAL_TEXT_LONG VARCHAR2(256 BYTE), HOLD_NOTICE_DATE DATE,
        PRINT_CANCEL_NP NUMBER(38) DEFAULT 0, PMWRITING_COMPANY NUMBER,
        QUOTE_STATUS VARCHAR2(40 BYTE), INFLATION_GUARD_PCT FLOAT(126),
        PRINT_DECLINATION NUMBER(38) DEFAULT 0, CANCEL_FUNC_DEPT VARCHAR2(80 BYTE),
        NONRENEW_FUNC_DEPT VARCHAR2(80 BYTE), CANCEL_TYPE VARCHAR2(40 BYTE),
        NONRENEW_TYPE VARCHAR2(40 BYTE), PARENT_AGENCY_KEY NUMBER, INFLATION_GUARD VARCHAR2(40 BYTE),
        DEC_FUNC_DEPT VARCHAR2(80 BYTE), SCANLINE VARCHAR2(254 BYTE),
        VIEW_PARENT_AGENCY NUMBER(38) DEFAULT 0 NOT NULL, CANCEL_SYSTEM NUMBER(38) DEFAULT 0,
        IS_SPECIAL_BILL NUMBER(1) DEFAULT 0, UMBRELLA_KEY NUMBER,
        REVERSE_REISSUE_FEE NUMBER DEFAULT 0, CONV_IN_THIS_TERM NUMBER DEFAULT 0,
        RENEWAL_CR_OVERRIDE_FLAG NUMBER, APP_AGENCY NUMBER, ACCOUNT NUMBER, CONTROL NUMBER,
        SPECIALTY_PGM NUMBER, ONLINE_REF_NUMBER NUMBER, GLOBAL_TERR_CLEARANCE NUMBER,
        GTC_ACTION NUMBER, LOSS_FREE_CREDIT NUMBER, ENTERED_BY VARCHAR2(80 BYTE),
        ENDORSE_QUOTE_INFO NUMBER, STP_FLAG NUMBER(10) DEFAULT 0, MVR_STATUS VARCHAR2(40 BYTE),
        APP_CASH_TYPE VARCHAR2(80 BYTE)
    )

In the OLE DB Destination editor, I get the query below when I click to create a new table. Can I use it as is, or do I need to modify it? It is creating every column twice, the second time with a "Copy of" prefix. Why are the columns being created twice?

    CREATE TABLE [OLE DB Destination] (
        [POLICY] nvarchar(38), [AGENCY] nvarchar(38), [WRITING_COMPANY] nvarchar(38),
        [NEW_WRITING_COMPANY] nvarchar(38), [DEC_ADDR] nvarchar(38), [POLICY_PREFIX] nvarchar(38),
        [NAMED_INSURED] nvarchar(38), [INSPECTOR] nvarchar(38), [RENEWED_AS] nvarchar(38),
        [NEW_WRITING_DATE] datetime, [RENEWED_FROM] nvarchar(38), [NEW_AGENCY] nvarchar(38),
        [NEW_AGENCY_DATE] datetime, [QUOTE_FLAG] numeric(38,0), [PRODUCER] varchar(80),
        [TERM_TYPE] varchar(80), [CURRENT_DEC] nvarchar(38), [TERM_MONTHS] numeric(38,0),
        [CURRENT_DEC_DATE] datetime, [TERM_EFFECTIVE_DATE] datetime, [UMBRELLA] varchar(80),
        [AGENCY_CODE] numeric(38,0), [TERM_EXPIRATION_DATE] datetime, [LEGAL_TEXT] varchar(80),
        [POLICY_NBR] varchar(80), [EMPLOYEE_FLAG] numeric(38,0), [INCEPTION_DATE] datetime,
        [QUOTE_SUFFIX] varchar(80), [ENDORSE_MODE] numeric(38,0), [QUOTE_REASON] varchar(40),
        [POLICY_SEARCH_NBR] varchar(80), [APP_CHECK] varchar(80), [AUDIT_FREQ] numeric(38,0),
        [TERM_NBR] numeric(38,0), [REMITTER] varchar(40), [AUDIT_FORM] varchar(40),
        [AUDIT_DATE] datetime, [MVR_REQUEST_DATE] datetime, [POLICY_STATUS] varchar(40),
        [APP_REMITTER] varchar(40), [POLICY_STATUS_DATE] datetime, [PYMT_PLAN] varchar(40),
        [ACTIVITY_STATUS] varchar(40), [APP_CASH_AMT] float, [ACTIVITY_STATUS_DATE] datetime,
        [APP_CHECK_KEY] nvarchar(38), [CANCEL_WAIT_DAYS] numeric(38,0),
        [CANCEL_EFFECTIVE_DATE] datetime, [APP_INSURED_NAME] varchar(80),
        [CANCEL_REQUEST_BY] varchar(40), [RISK_GROSS_AMT] float, [CANCEL_CHECK_SENT] varchar(40),
        [CANCEL_INSPECTION] numeric(38,0), [REINSTATED_EFFECTIVE_DATE] datetime,
        [NONRENEW_WAIT_DAYS] numeric(38,0), [NON_PAY_NOTICE] numeric(38,0),
        [CANCEL_NOTICE_DATE] datetime, [NONRENEW_NOTICE_DATE] datetime,
        [NONRENEW_INSPECTION] numeric(38,0), [CANCEL_PREM_DATE] datetime, [EXTENSION_DATE] datetime,
        [CANCEL_ACTUAL_DATE] datetime, [REISSUE_FEE] numeric(38,0), [CREATE_ID] varchar(30),
        [CANCEL_NSF_AMT] float, [FIRST_MODIFIED] datetime, [AUDIT_ID] varchar(30),
        [LAST_MODIFIED] datetime, [REINSTATED_REASON] varchar(40), [NON_PAY_COUNT] numeric(38,0),
        [BILL_TYPE] varchar(40), [LAST_BILL_DATE] datetime, [NEXT_BILL_DATE] datetime,
        [NONRENEW_EFFECTIVE_DATE] datetime, [BILL_DUE_DATE] datetime, [CURRENT_GROSS_OS] float,
        [CURRENT_NET_OS] float, [COMMISSION_PCT] float, [CURRENT_INFORCE] float,
        [CARRY_DATE] datetime, [PREVIOUS_CARRIER] varchar(40), [REFERRED_BY] varchar(80),
        [CURRENT_MIN_DUE] float, [RENEWAL_CERT_COUNT] numeric(38,0),
        [FACULTATIVE_FLAG] numeric(38,0), [WRITING_CODE] numeric(38,0), [BRANCH_CODE] numeric(38,0),
        [PRINT_CANCEL] numeric(38,0), [PRINT_NONRENEW] numeric(38,0),
        [PRINT_REINSTATE] numeric(38,0), [PRINT_NR_RESCIND] numeric(38,0),
        [PRINT_AGENCY] numeric(38,0), [BILL_STATUS] varchar(40), [BILL_STATUS_DATE] datetime,
        [NOTICE_STATUS] varchar(40), [NOTICE_STATUS_DATE] datetime, [CANCEL_COUNT] numeric(38,0),
        [CANCEL_NOTICE_COUNT] numeric(38,0), [FULL_TERM] numeric(38,0),
        [NEW_AGENCY_DEC_FLAG] numeric(38,0), [BUSINESS_LINE] nvarchar(38),
        [AGENCY_EXTENSION] numeric(38,0), [LEGAL_TEXT_LONG] varchar(256),
        [HOLD_NOTICE_DATE] datetime, [PRINT_CANCEL_NP] numeric(38,0),
        [PMWRITING_COMPANY] nvarchar(38), [QUOTE_STATUS] varchar(40), [INFLATION_GUARD_PCT] float,
        [PRINT_DECLINATION] numeric(38,0), [CANCEL_FUNC_DEPT] varchar(80),
        [NONRENEW_FUNC_DEPT] varchar(80), [CANCEL_TYPE] varchar(40), [NONRENEW_TYPE] varchar(40),
        [PARENT_AGENCY_KEY] nvarchar(38), [INFLATION_GUARD] varchar(40), [DEC_FUNC_DEPT] varchar(80),
        [SCANLINE] varchar(254), [VIEW_PARENT_AGENCY] numeric(38,0), [CANCEL_SYSTEM] numeric(38,0),
        [IS_SPECIAL_BILL] numeric(1,0), [UMBRELLA_KEY] nvarchar(38),
        [REVERSE_REISSUE_FEE] nvarchar(38), [CONV_IN_THIS_TERM] nvarchar(38),
        [RENEWAL_CR_OVERRIDE_FLAG] nvarchar(38), [APP_AGENCY] nvarchar(38), [ACCOUNT] nvarchar(38),
        [CONTROL] nvarchar(38), [SPECIALTY_PGM] nvarchar(38), [ONLINE_REF_NUMBER] nvarchar(38),
        [GLOBAL_TERR_CLEARANCE] nvarchar(38), [GTC_ACTION] nvarchar(38),
        [LOSS_FREE_CREDIT] nvarchar(38), [ENTERED_BY] varchar(80), [ENDORSE_QUOTE_INFO] nvarchar(38),
        [STP_FLAG] numeric(10,0), [MVR_STATUS] varchar(40), [APP_CASH_TYPE] varchar(80),
        [Copy of POLICY] nvarchar(38), [Copy of AGENCY] nvarchar(38),
        [Copy of WRITING_COMPANY] nvarchar(38), [Copy of NEW_WRITING_COMPANY] nvarchar(38),
        [Copy of DEC_ADDR] nvarchar(38), [Copy of POLICY_PREFIX] nvarchar(38),
        [Copy of NAMED_INSURED] nvarchar(38), [Copy of INSPECTOR] nvarchar(38),
        [Copy of RENEWED_AS] nvarchar(38), [Copy of NEW_WRITING_DATE] datetime,
        [Copy of RENEWED_FROM] nvarchar(38), [Copy of NEW_AGENCY] nvarchar(38),
        [Copy of NEW_AGENCY_DATE] datetime, [Copy of QUOTE_FLAG] numeric(38,0),
        [Copy of PRODUCER] varchar(80), [Copy of TERM_TYPE] varchar(80),
        [Copy of CURRENT_DEC] nvarchar(38), [Copy of TERM_MONTHS] numeric(38,0),
        [Copy of CURRENT_DEC_DATE] datetime, [Copy of TERM_EFFECTIVE_DATE] datetime,
        [Copy of UMBRELLA] varchar(80), [Copy of AGENCY_CODE] numeric(38,0),
        [Copy of TERM_EXPIRATION_DATE] datetime, [Copy of LEGAL_TEXT] varchar(80),
        [Copy of POLICY_NBR] varchar(80), [Copy of EMPLOYEE_FLAG] numeric(38,0),
        [Copy of INCEPTION_DATE] datetime, [Copy of QUOTE_SUFFIX] varchar(80),
        [Copy of ENDORSE_MODE] numeric(38,0), [Copy of QUOTE_REASON] varchar(40),
        [Copy of POLICY_SEARCH_NBR] varchar(80), [Copy of APP_CHECK] varchar(80),
        [Copy of AUDIT_FREQ] numeric(38,0), [Copy of TERM_NBR] numeric(38,0),
        [Copy of REMITTER] varchar(40), [Copy of AUDIT_FORM] varchar(40),
        [Copy of AUDIT_DATE] datetime, [Copy of MVR_REQUEST_DATE] datetime,
        [Copy of POLICY_STATUS] varchar(40), [Copy of APP_REMITTER] varchar(40),
        [Copy of POLICY_STATUS_DATE] datetime, [Copy of PYMT_PLAN] varchar(40),
        [Copy of ACTIVITY_STATUS] varchar(40), [Copy of APP_CASH_AMT] float,
        [Copy of ACTIVITY_STATUS_DATE] datetime, [Copy of APP_CHECK_KEY] nvarchar(38),
        [Copy of CANCEL_WAIT_DAYS] numeric(38,0), [Copy of CANCEL_EFFECTIVE_DATE] datetime,
        [Copy of APP_INSURED_NAME] varchar(80), [Copy of CANCEL_REQUEST_BY] varchar(40),
        [Copy of RISK_GROSS_AMT] float, [Copy of CANCEL_CHECK_SENT] varchar(40),
        [Copy of CANCEL_INSPECTION] numeric(38,0), [Copy of REINSTATED_EFFECTIVE_DATE] datetime,
        [Copy of NONRENEW_WAIT_DAYS] numeric(38,0), [Copy of NON_PAY_NOTICE] numeric(38,0),
        [Copy of CANCEL_NOTICE_DATE] datetime, [Copy of NONRENEW_NOTICE_DATE] datetime,
        [Copy of NONRENEW_INSPECTION] numeric(38,0), [Copy of CANCEL_PREM_DATE] datetime,
        [Copy of EXTENSION_DATE] datetime, [Copy of CANCEL_ACTUAL_DATE] datetime,
        [Copy of REISSUE_FEE] numeric(38,0), [Copy of CREATE_ID] varchar(30),
        [Copy of CANCEL_NSF_AMT] float, [Copy of FIRST_MODIFIED] datetime,
        [Copy of AUDIT_ID] varchar(30), [Copy of LAST_MODIFIED] datetime,
        [Copy of REINSTATED_REASON] varchar(40), [Copy of NON_PAY_COUNT] numeric(38,0),
        [Copy of BILL_TYPE] varchar(40), [Copy of LAST_BILL_DATE] datetime,
        [Copy of NEXT_BILL_DATE] datetime, [Copy of NONRENEW_EFFECTIVE_DATE] datetime,
        [Copy of BILL_DUE_DATE] datetime, [Copy of CURRENT_GROSS_OS] float,
        [Copy of CURRENT_NET_OS] float, [Copy of COMMISSION_PCT] float,
        [Copy of CURRENT_INFORCE] float, [Copy of CARRY_DATE] datetime,
        [Copy of PREVIOUS_CARRIER] varchar(40), [Copy of REFERRED_BY] varchar(80),
        [Copy of CURRENT_MIN_DUE] float, [Copy of RENEWAL_CERT_COUNT] numeric(38,0),
        [Copy of FACULTATIVE_FLAG] numeric(38,0), [Copy of WRITING_CODE] numeric(38,0),
        [Copy of BRANCH_CODE] numeric(38,0), [Copy of PRINT_CANCEL] numeric(38,0),
        [Copy of PRINT_NONRENEW] numeric(38,0), [Copy of PRINT_REINSTATE] numeric(38,0),
        [Copy of PRINT_NR_RESCIND] numeric(38,0), [Copy of PRINT_AGENCY] numeric(38,0),
        [Copy of BILL_STATUS] varchar(40), [Copy of BILL_STATUS_DATE] datetime,
        [Copy of NOTICE_STATUS] varchar(40), [Copy of NOTICE_STATUS_DATE] datetime,
        [Copy of CANCEL_COUNT] numeric(38,0), [Copy of CANCEL_NOTICE_COUNT] numeric(38,0),
        [Copy of FULL_TERM] numeric(38,0), [Copy of NEW_AGENCY_DEC_FLAG] numeric(38,0),
        [Copy of BUSINESS_LINE] nvarchar(38), [Copy of AGENCY_EXTENSION] numeric(38,0),
        [Copy of LEGAL_TEXT_LONG] varchar(256), [Copy of HOLD_NOTICE_DATE] datetime,
        [Copy of PRINT_CANCEL_NP] numeric(38,0), [Copy of PMWRITING_COMPANY] nvarchar(38),
        [Copy of QUOTE_STATUS] varchar(40), [Copy of INFLATION_GUARD_PCT] float,
        [Copy of PRINT_DECLINATION] numeric(38,0), [Copy of CANCEL_FUNC_DEPT] varchar(80),
        [Copy of NONRENEW_FUNC_DEPT] varchar(80), [Copy of CANCEL_TYPE] varchar(40),
        [Copy of NONRENEW_TYPE] varchar(40), [Copy of PARENT_AGENCY_KEY] nvarchar(38),
        [Copy of INFLATION_GUARD] varchar(40), [Copy of DEC_FUNC_DEPT] varchar(80),
        [Copy of SCANLINE] varchar(254), [Copy of VIEW_PARENT_AGENCY] numeric(38,0),
        [Copy of CANCEL_SYSTEM] numeric(38,0), [Copy of IS_SPECIAL_BILL] numeric(1,0),
        [Copy of UMBRELLA_KEY] nvarchar(38), [Copy of REVERSE_REISSUE_FEE] nvarchar(38),
        [Copy of CONV_IN_THIS_TERM] nvarchar(38), [Copy of RENEWAL_CR_OVERRIDE_FLAG] nvarchar(38),
        [Copy of APP_AGENCY] nvarchar(38), [Copy of ACCOUNT] nvarchar(38),
        [Copy of CONTROL] nvarchar(38), [Copy of SPECIALTY_PGM] nvarchar(38),
        [Copy of ONLINE_REF_NUMBER] nvarchar(38), [Copy of GLOBAL_TERR_CLEARANCE] nvarchar(38),
        [Copy of GTC_ACTION] nvarchar(38), [Copy of LOSS_FREE_CREDIT] nvarchar(38),
        [Copy of ENTERED_BY] varchar(80), [Copy of ENDORSE_QUOTE_INFO] nvarchar(38),
        [Copy of STP_FLAG] numeric(10,0), [Copy of MVR_STATUS] varchar(40),
        [Copy of APP_CASH_TYPE] varchar(80)
    )
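Two notes, offered as suggestions rather than the canonical answer. First, the doubled columns: the Data Conversion transformation adds a new output column for each converted input, named "Copy of <column>" by default, and the destination's table-generation script emits every input column it can see, originals and conversions alike. Rather than using the generated script as is, either map only the converted columns into a table you create yourself, or rename the conversion outputs. Second, a hand-written SQL Server equivalent would follow the usual Oracle type mappings: NUMBER(38) to numeric(38,0), bare NUMBER to numeric or float depending on whether fractional values occur, VARCHAR2(n BYTE) to varchar(n), FLOAT(126) to float, and Oracle DATE to datetime (Oracle DATE carries a time of day). A sketch of the first few columns under those assumptions; the rest of the table continues the same pattern:

    CREATE TABLE dbo.POLICY (
        POLICY           numeric(38,0) NOT NULL,  -- bare Oracle NUMBER; assumes integral data
        AGENCY           numeric(38,0),
        NEW_WRITING_DATE datetime,                -- Oracle DATE includes a time component
        QUOTE_FLAG       numeric(38,0) DEFAULT 0,
        PRODUCER         varchar(80),
        APP_CASH_AMT     float,                   -- FLOAT(126) corresponds to float(53)
        CREATE_ID        varchar(30) DEFAULT SUSER_SNAME(),  -- stands in for Oracle's USER
        FIRST_MODIFIED   datetime DEFAULT GETDATE()          -- stands in for SYSDATE
        -- ...continue the same mapping for the remaining columns
    );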

mssql split column string into rows

Posted: 19 Jul 2013 05:15 AM PDT

I have SELECT results like this:

    CAGE_CODE  BUS_TYPE_STRING
    6UC62      2U.V2
    6UHK6      2X.A5.VW.XS
    6U4M3      2U.V2
    6U7L8      23.27.2X.8E.8W.A2.PI.VW
    6U4X4      2X.VW
    6UJU9      2U.LJ.V2
    6U6T9      2X.VW.XS
    6U9K1      2X.LJ.VW
    5NFQ4      27.2X.8E.8W.A2.HQ.LJ.VW
    5R4D8      27.2X.8E.8W.A2.HQ.LJ.VW

and I want to split BUS_TYPE_STRING into rows, like:

    6UC62  2U
    6UC62  V2
    6UHK6  2X
    6UHK6  A5
    6UHK6  VW
    6UHK6  XS
    ....

What is the best way to achieve this?
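On versions that predate STRING_SPLIT (SQL Server 2008 here), a common approach is the XML trick: turn the dotted string into an XML fragment and shred it with nodes(). A minimal sketch, assuming the data lives in a table dbo.Cage with the two columns shown; it relies on the values containing no XML-special characters (&, <, >):

    SELECT c.CAGE_CODE,
           x.v.value('.', 'varchar(10)') AS BUS_TYPE
    FROM (SELECT CAGE_CODE,
                 -- '2U.V2' becomes '<t>2U</t><t>V2</t>'
                 CAST('<t>' + REPLACE(BUS_TYPE_STRING, '.', '</t><t>') + '</t>' AS xml) AS parts
          FROM dbo.Cage) AS c
    CROSS APPLY c.parts.nodes('/t') AS x(v);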

Double Hop Question

Posted: 19 Jul 2013 04:50 AM PDT

If I use my laptop to connect to server A using SSMS and run a query there that joins to a table on server B, I get the well-known response, "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'." If I remote desktop to server A and run the same query using SSMS, it executes successfully (it's only a single hop). If I then run the original query from the laptop again, it also executes successfully; I think the Kerberos authentication is somehow allowing me to run queries from my laptop for a limited time.

Here's the question: server B is in a different domain than server A, and the network folks can't or won't set up the trusted domains and Active Directory configuration needed to solve the double-hop problem properly, so I'm looking for a workaround. I have a small set of users. I'd like to give them remote desktop access to server A and, when they log in, have a sqlcmd run in a batch script that just connects to server B. Then the Kerberos ticket should allow them to run queries from their laptops. When I try it myself:

- remote desktop to server A
- open a cmd prompt
- run sqlcmd -S serverB -d Db1 -q "SELECT top 1 ID from dbo.company;" [runs successfully]
- go back to my laptop and run the original query

I get "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'." again. Please help.

Thanks,
Seth
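For reference, the logon-script idea described above would look something like the sketch below (server, database, and query are taken from the question; the script itself is hypothetical). Note that -Q runs the query and exits, while the -q used above leaves sqlcmd waiting for input, which you would not want in a logon script. Whether this reliably primes Kerberos for the laptop connection is exactly what the question reports failing:

    @echo off
    rem Hypothetical logon script on server A: open a trusted connection
    rem to server B in the hope of caching a Kerberos ticket for this login.
    sqlcmd -S serverB -d Db1 -Q "SELECT TOP 1 ID FROM dbo.company;" > nul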

MERGE statement - WHEN NOT MATCHED DELETE takes ages

Posted: 19 Jul 2013 01:58 AM PDT

I am using the MERGE functionality in SQL Server 2008. My code works fine. However, if I clean the buffers, I've noticed that the WHEN NOT MATCHED ... DELETE clause adds an extra 8 seconds onto my otherwise sub-second query, even when there is nothing to delete. Is this usually the case with the MERGE statement?

Thanks
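For context, the clause in question is WHEN NOT MATCHED BY SOURCE THEN DELETE, which removes target rows that have no match in the source. A minimal sketch with assumed table names:

    MERGE dbo.Target AS t
    USING dbo.Source AS s
        ON t.ID = s.ID
    WHEN MATCHED THEN
        UPDATE SET t.Val = s.Val
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ID, Val) VALUES (s.ID, s.Val)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;  -- requires examining every target row for a missing source match

Even when nothing qualifies for deletion, evaluating that last clause means reading the whole target to look for unmatched rows, which is consistent with the extra time seen on a cold cache.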

Help on finding the total space used by a database schema

Posted: 19 Jul 2013 02:06 AM PDT

I am using SQL Server 2008 R2 and I would like to find the total space used by just the schema of the database, not the data.

Thanks
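If "schema" here means a per-schema space breakdown, a sketch against the standard catalog views and DMVs (run inside the database in question) would be:

    SELECT sch.name AS schema_name,
           SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb  -- pages are 8 KB
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.objects AS o   ON o.object_id = ps.object_id
    JOIN sys.schemas AS sch ON sch.schema_id = o.schema_id
    GROUP BY sch.name
    ORDER BY reserved_mb DESC;

If instead it means the space taken by the object definitions themselves (the DDL metadata), that lives in the system catalogs and is normally negligible compared to the data.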

Reporting Services scheduling- hourly between 8 and 5 impossible?

Posted: 04 Jun 2012 05:14 AM PDT

Trying to get a report to run daily between certain hours. I can set a start time and an "end date" in Report Manager, but as far as I can tell, I can't say "run hourly every day from X to Y". Am I missing something? If this isn't doable from RS, can I just find the associated job in the SQL Server Agent and change that job's schedule?
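Editing the Agent side directly is possible, since each SSRS schedule is backed by a SQL Agent job (named with the schedule's GUID). A hedged sketch, assuming you have already looked up the relevant schedule's id in msdb, that restricts an hourly schedule to 08:00-17:00; be aware that SSRS may regenerate the schedule and overwrite such edits if the subscription is later changed in Report Manager:

    USE msdb;
    EXEC dbo.sp_update_schedule
         @schedule_id          = 42,      -- hypothetical: look up the SSRS job's schedule id
         @freq_type            = 4,       -- daily
         @freq_interval        = 1,       -- every day
         @freq_subday_type     = 8,       -- repeat in units of hours
         @freq_subday_interval = 1,       -- every 1 hour
         @active_start_time    = 80000,   -- 08:00:00
         @active_end_time      = 170000;  -- 17:00:00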

creating a non sequential incremental number field

Posted: 18 Jul 2013 08:57 PM PDT

    Client  Funding  Total Commissioned Value  Payment Frequency
    1       1        100                       4 weeks
    2       2        158                       4 weeks
    3       3        254                       4 weeks
    4       4        36                        4 weeks
    5       5        21                        4 weeks

My solution to the problem of creating a funding stream is to create a new table with two variable and one fixed data columns:

- Increment – determined by the Payment Frequency value within the Funding Details table.
- Date – shows the days on which funding payments are received; determined by adding the Increment value to the Start Date for each record.
- Income – a fixed value taken directly from the Funding Details table (Total Commissioned Value).

Client 1 Funding Stream:

    Increment  Date        Income
    1          01/04/2013  100
    28         29/04/2013  100
    55         26/05/2013  100
    82         22/06/2013  100
    109        19/07/2013  100
    136        15/08/2013  100
    163        11/09/2013  100
    190        08/10/2013  100
    217        04/11/2013  100
    244        01/12/2013  100
    271        28/12/2013  100
    298        24/01/2014  100
    325        20/02/2014  100
    352        19/03/2014  100
    379        15/04/2014  100

Here is my code:

    IF EXISTS (SELECT * FROM sys.objects
               WHERE object_id = OBJECT_ID(N'[dbo].[FundingReportTable]') AND type in (N'U'))
        DROP TABLE [dbo].[FundingReportTable];

    declare @date datetime = '20100101'

    CREATE TABLE [DBO].[FundingStream]
    (
        Id_num INT IDENTITY(1,28),
        Date DATETIME NOT NULL,
        Income NUMERICAL (5,2)
    )

    INSERT INTO [FundingStream] (Date, Income)
    SELECT ([SELECT dateadd(dd,id_num,'20100101') FROM master..spt_values
             WHERE dateadd(dd,id_num,'20100101') < dateadd(yy,1,@date)],
            [FundingDetails].[Total Commissioned Value])
    FROM FundingDetails;

When this code is run I get the following error message: Microsoft SQL Server error 102. Can anyone help?
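Error 102 is SQL Server's generic "Incorrect syntax" error; in the code above, the square brackets wrapped around a subquery, the type name NUMERICAL (T-SQL spells it NUMERIC), and the IDENTITY(1,28) seed/increment misuse are all suspect. A hedged rewrite of what the code appears to be aiming at, generating one payment row per 28-day increment for a year; FundingDetails and its columns are assumed from the question:

    DECLARE @start datetime = '20130401';

    IF OBJECT_ID(N'dbo.FundingStream', N'U') IS NOT NULL
        DROP TABLE dbo.FundingStream;

    CREATE TABLE dbo.FundingStream
    (
        Id_num INT IDENTITY(1,1),  -- plain surrogate key; the 28-day step belongs in the data
        [Date] DATETIME NOT NULL,
        Income NUMERIC(5,2)        -- NUMERIC, not NUMERICAL
    );

    INSERT INTO dbo.FundingStream ([Date], Income)
    SELECT DATEADD(dd, v.number * 28, @start),
           fd.[Total Commissioned Value]
    FROM dbo.FundingDetails AS fd
    CROSS JOIN master..spt_values AS v
    WHERE v.type = 'P'             -- spt_values' built-in 0..2047 number list
      AND DATEADD(dd, v.number * 28, @start) < DATEADD(yy, 1, @start);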

Database defragmentation and autogrowth settings

Posted: 19 Jul 2013 01:21 AM PDT

We have a maintenance plan for our SQL Server 2008 R2 Express instance. Every month we defragment the database wherever a table has a page count above 50 and average fragmentation above 20 percent: if avg_fragmentation_in_percent is above 30 the index is rebuilt, and if it is between 5 and 30 the index is reorganized. If the database log grows beyond 2 MB, the recovery model is switched to SIMPLE, the log is shrunk, and the recovery model is set back to FULL. This is what we are doing so far.

But we have found that autogrowth events are resource-intensive and should not happen repeatedly. Right now every database uses the defaults chosen at database creation: growth in MB for the mdf file and 10% for the ldf file. We are planning to raise the autogrowth increments based on how much each database grows per day, but I want to know what autogrowth frequency is reasonable. Should I size the increment so that growth happens only once a day, once a week, once a month, etc.? Please help me set the autogrowth values for my databases.

There is also another problem: since the monthly defragmentation shrinks the database, every shrunk database will autogrow again as soon as new data is written, so there will be many autogrowth events. Will that be a problem? Please suggest a solution.
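To change the increments, fixed-size growth is generally preferred over percentage growth because each event is then predictable. A minimal sketch, assuming a database named MyDb with logical file names MyDb and MyDb_log; pick sizes so that, given your measured daily growth, a growth event lands roughly once a week or month rather than many times a day:

    -- Logical file names come from: SELECT name, type_desc FROM sys.database_files;
    ALTER DATABASE MyDb
        MODIFY FILE (NAME = MyDb,     FILEGROWTH = 256MB);  -- data file: fixed increment
    ALTER DATABASE MyDb
        MODIFY FILE (NAME = MyDb_log, FILEGROWTH = 128MB);  -- log file: fixed increment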

Alerts for Database Mirroring

Posted: 15 Jul 2013 01:30 AM PDT

Hi,

I'm trying to get SQL Server to alert me via email when the principal instance loses its connection to the mirrored instance (i.e. mirroring becomes suspended/inactive). So far, the methods I've found require Service Broker to be enabled on the mirrored DB; unfortunately, that cannot be enabled in this scenario. Is there any other simple way to have SQL Server notify me (via email) when mirroring isn't active?

Thanks.
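One broker-free option is a scheduled SQL Agent job on the principal that polls sys.database_mirroring and mails when the state is unhealthy. A minimal sketch, assuming Database Mail is configured; the profile name and recipient are placeholders:

    IF EXISTS (SELECT 1
               FROM sys.database_mirroring
               WHERE mirroring_guid IS NOT NULL   -- only databases that are mirrored
                 AND mirroring_state_desc NOT IN ('SYNCHRONIZED', 'SYNCHRONIZING'))
    BEGIN
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'DBA',                -- placeholder Database Mail profile
             @recipients   = 'dba@example.com',    -- placeholder address
             @subject      = 'Database mirroring is not synchronized',
             @body         = 'Check sys.database_mirroring on the principal instance.';
    END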

How to take backup of Single table and how to restore?

Posted: 13 Jul 2013 03:44 PM PDT

Hi... How do I take a backup of a single table, and how do I restore it? Is there a query for this like there is for a database backup?

shivakumar...
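SQL Server has no BACKUP TABLE command, so single-table "backups" are usually done by copying the data. Two common stand-ins, sketched with an assumed table dbo.MyTable in database MyDb:

    -- 1) Keep a copy inside the database (indexes and constraints are not copied):
    SELECT * INTO dbo.MyTable_backup FROM dbo.MyTable;

    -- 2) Export/import the data with bcp from a command prompt
    --    (-n = native format, -T = Windows authentication, -S = server):
    --    bcp MyDb.dbo.MyTable out C:\backup\MyTable.dat -n -T -S MyServer
    --    bcp MyDb.dbo.MyTable in  C:\backup\MyTable.dat -n -T -S MyServer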
