Saturday, July 20, 2013

[how to] I've backlogged queries on my SQL Server - what now?

I've backlogged queries on my SQL Server - what now?

Posted: 20 Jul 2013 05:07 PM PDT

So, I'm a bit of a plonker right...

By accident, I issued somewhere in the region of 500,000 individual queries against my DB. They have been running for 6 hours now. I restarted the server, but they continue to insert. I need them to stop - how can I do this?
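A minimal diagnostic sketch (not from the original post), assuming SQL Server and that the runaway inserts show up as ordinary user sessions; the session_id 123 is a placeholder:

-- List user requests that are still running (session_ids above 50 are user sessions)
SELECT r.session_id, r.status, r.command, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;

-- Kill one offending session (replace 123 with a real session_id from the query above)
KILL 123;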

How to use group by to select only the group row in these cases

Posted: 20 Jul 2013 08:22 PM PDT

I'm learning SQL for the first time and I was given these queries to write.

I wrote the statements below to solve the problem, and they work, but they don't use GROUP BY.

Can somebody please suggest how to change them? They should only use GROUP BY and maybe WHERE; I don't think this assignment is supposed to use anything more advanced.

What is the latest birth date of the presidents? (Youngest President)

SELECT birth, first_name, last_name
FROM sampdb.president
WHERE birth = (SELECT MAX(birth) FROM sampdb.president);

+------------+------------+-----------+
| birth      | first_name | last_name |
+------------+------------+-----------+
| 1946-08-19 | William J. | Clinton   |
+------------+------------+-----------+
1 row in set (0.00 sec)

SELECT birth, first_name, last_name
FROM sampdb.president
WHERE birth >= ALL (SELECT MAX(birth) FROM sampdb.president);

+------------+------------+-----------+
| birth      | first_name | last_name |
+------------+------------+-----------+
| 1946-08-19 | William J. | Clinton   |
+------------+------------+-----------+
1 row in set (0.00 sec)

The earliest? (Oldest president)

SELECT birth, first_name, last_name
FROM sampdb.president
WHERE birth = (SELECT MIN(birth) FROM sampdb.president);

+------------+------------+------------+
| birth      | first_name | last_name  |
+------------+------------+------------+
| 1732-02-22 | George     | Washington |
+------------+------------+------------+
1 row in set (0.00 sec)

'ambiguous column name' error refers to database, table, and column

Posted: 20 Jul 2013 10:14 AM PDT

sqlite3 gives an 'ambiguous column name' error with the query below. The error itself references about as unambiguous a column name as one can imagine, identifying the database as 'sourcedb', the table as 'answers', and the field as 'markup'. Why is this query triggering the error?

sqlite> SELECT answers.*
   ...> FROM   sourcedb.answers
   ...>        INNER JOIN sourcedb.questions_tests
   ...>          ON sourcedb.questions_tests.testskey = '121212eczema'
   ...>        INNER JOIN sourcedb.answers_questions
   ...>          ON sourcedb.questions_tests.questionskey = sourcedb.answers.questionskey
   ...>        INNER JOIN sourcedb.answers
   ...>          ON sourcedb.answers.answerskey = sourcedb.answers_questions.questionskey;
Error: ambiguous column name: sourcedb.answers.markup

Some additional detail...

sqlite> pragma table_info(answers);
0|markup|TEXT|0||0
1|identifier|TEXT|0||0
2|text|TEXT|0||0
3|answerskey|INTEGER|0||1
4|answertype|TEXT|0||0
5|ReadPerms||0||0
6|UpdatePerms||0||0
7|DeletePerms||0||0
sqlite> pragma database_list;
0|main|
2|sourcedb|/root/.ttest/database/sqlite/ttest-simple.sqlite
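Not from the original post, but one way to see the cause: the query references sourcedb.answers twice (in the FROM clause and again in the last INNER JOIN), so any column from it is ambiguous. A hedged sketch of how aliasing each reference might look; the join keys below are assumptions about the intended relationships:

sqlite> SELECT a.*
   ...> FROM   sourcedb.answers AS a
   ...>        INNER JOIN sourcedb.answers_questions AS aq
   ...>          ON aq.answerskey = a.answerskey
   ...>        INNER JOIN sourcedb.questions_tests AS qt
   ...>          ON qt.questionskey = aq.questionskey
   ...> WHERE  qt.testskey = '121212eczema';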

Moving large databases

Posted: 20 Jul 2013 09:08 PM PDT

I have a CentOS server where /var/lib/mysql/ is 125 GB (the disk has 1 GB of free space).

Ordinarily I would use mysqldump to backup the databases, but I don't normally work with such large databases, so I need to know the safest way of copying the databases over to a new server.

All advice appreciated!

MySQL - password encryption type

Posted: 20 Jul 2013 09:13 AM PDT

I would like to confirm whether MySQL uses SHA1 for PASSWORD() hashing, and why the test below doesn't work:

mysql> GRANT USAGE ON *.* TO 'test'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> SELECT user,host,password FROM mysql.user WHERE user='test';
+------+-----------+-------------------------------------------+
| user | host      | password                                  |
+------+-----------+-------------------------------------------+
| test | localhost | *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
+------+-----------+-------------------------------------------+
1 row in set (0.01 sec)

mysql> \q
Bye
[root@test ~]# echo -n "password" | sha1sum | awk '{print toupper($1)}'
5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
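For what it's worth, MySQL's PASSWORD() (pre-5.7 native authentication) is not a single SHA-1 but SHA-1 applied twice, with a '*' prefix; a quick sketch that should reproduce the stored hash:

SELECT PASSWORD('password');
-- *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19

SELECT CONCAT('*', UPPER(SHA1(UNHEX(SHA1('password')))));
-- *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19   (same value)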

Databases not coming up in SQL Server 2008 R2

Posted: 20 Jul 2013 11:14 AM PDT

Here is something that happened on my system that I have just noticed.

None of the databases are showing up in the Object Explorer window, even though the server has been connected to successfully.

It is showing only one database, which I created exactly a year ago.

How can I get those databases back? They are really important.
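A hedged first diagnostic step, assuming the databases still exist but are simply not visible to the connecting login; run this while connected as that login (the database name is a placeholder):

-- Lists every database the instance knows about and its state
SELECT name, state_desc, user_access_desc
FROM sys.databases;

-- Returns 1 if the current login can actually access the named database
SELECT HAS_DBACCESS('YourDatabaseName');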

Improving performance of a query by using greater than on a primary key

Posted: 20 Jul 2013 06:05 AM PDT

I have a reporting table where I store a description:

tableA

sno   | Project | name  | description        | mins |
1     | prjA    | nameA | ABC -10% task done | 30   |
...
3000  | prjA    | nameB | ABC -70% task done | 70   |

I want to query the description field and save the result in another table:

tableB

id | valueStr | total_mins | last_sno
1  | ABC      | 100        | 3000

If there is no entry in the second table, I create an entry with default values.

If there is an entry in the second table, I update it with the new total_mins and advance last_sno to the latest value, say 3300, so that the next time I run the process I read last_sno from the second table and query the first table based on it.

Query:

SELECT last_sno FROM tableB WHERE valueStr = 'ABC'

Then I query the first table, matching on the first 3 characters of the description field:

SELECT max(sno), sum(mins)
FROM tableA
WHERE sno > last_sno AND description LIKE 'ABC%'

Since the first table has millions of rows, I search it with sno > last_sno; that should help performance, right?

But EXPLAIN shows that it scans the same number of rows as when I query the first table from the very first sno.
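Not part of the original question, but one way to check whether the range on sno is being considered at all is to compare the plan with the primary key forced (assuming sno is the primary key, as the title suggests; the constant 3000 stands in for last_sno):

EXPLAIN SELECT MAX(sno), SUM(mins)
FROM tableA FORCE INDEX (PRIMARY)
WHERE sno > 3000 AND description LIKE 'ABC%';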

PostgreSQL: Group By primary key or DISTINCT increase query time over 1000x with limit

Posted: 19 Jul 2013 11:23 PM PDT

Also see http://stackoverflow.com/questions/17741167/hibernate-jpa-improve-performance-of-distinct-query but I realized this is mainly a PostgreSQL issue.

My application uses a 3rd party extension to PostgreSQL for searching chemical structures. This is in general slow. I cannot change the SQL directly, as the application uses Hibernate and a native query is not an option.

I have a many-to-many relationship in which the "link table" has an additional column. Basically I have a Mixture that consists of elements, and an element occurs in the mixture at a certain percentage.

I use Spring Data JPA with QueryDSL, Hibernate and PostgreSQL, and the query in question has 2 joins across that link table.

I'm now searching all Mixtures that contain an element matching the given criteria. Because a mixture can have multiple elements that match the criteria, the query may return the same entity multiple times. I want to prevent that, hence the DISTINCT or the GROUP BY on the primary key.

The query is also paged, meaning it uses limit and offset. The query runs perfectly fine without either DISTINCT or GROUP BY, but then I can get duplicate rows. If I add either GROUP BY or DISTINCT, the query is over 1000 times slower.

Query with DISTINCT (note SQL from hibernate):

select distinct
    simplecomp0_.chem_compound_id as chem1_0_,
    --snipped about 10 more columns, all short varchar or date fields
from simple_compound simplecomp0_
    inner join compound_composition compositio1_
        on simplecomp0_.chem_compound_id = compositio1_.chem_compound_id
    inner join chemical_structure chemicalst2_
        on compositio1_.chemical_structure_id = chemicalst2_.structure_id
where
    chemicalst2_.structure_id @ ('CCNc1ccccc1', '')::bingo.sub
limit 5
offset 5

EXPLAIN ANALYZE with DISTINCT:

"Limit  (cost=5984.58..5984.63 rows=5 width=1645) (actual time=6342.541..6342.543 rows=5 loops=1)"  "  ->  HashAggregate  (cost=5984.53..5989.79 rows=526 width=1645) (actual time=6342.538..6342.542 rows=10 loops=1)"  "        ->  Nested Loop  (cost=0.00..5971.38 rows=526 width=1645) (actual time=7.289..6166.512 rows=128527 loops=1)"  "              ->  Nested Loop  (cost=0.00..4445.81 rows=526 width=8) (actual time=7.281..5694.663 rows=128527 loops=1)"  "                    ->  Index Scan using idx_chemical_structure on chemical_structure chemicalst2_  (cost=0.00..20.26 rows=526 width=8) (actual time=7.262..5013.620 rows=128508 loops=1)"  "                          Index Cond: (chemical_structure @ ROW('CCNc1ccccc1'::text, ''::text)::bingo.sub)"  "                    ->  Index Only Scan using compound_composition_pkey on compound_composition compositio1_  (cost=0.00..8.40 rows=1 width=16) (actual time=0.003..0.004 rows=1 loops=128508)"  "                          Index Cond: (chemical_structure_id = chemicalst2_.structure_id)"  "                          Heap Fetches: 128527"  "              ->  Index Scan using idx_pk on simple_compound simplecomp0_  (cost=0.00..2.89 rows=1 width=1645) (actual time=0.002..0.003 rows=1 loops=128527)"  "                    Index Cond: (chem_compound_id = compositio1_.chem_compound_id)"  "Total runtime: 6344.584 ms"  

also http://explain.depesz.com/s/ocC

The long runtime is caused by searching the 3rd party index used for chemical structure search. For some reason the whole index is searched.

If the DISTINCT is removed, limit and offset are correctly applied to the 3rd party index and the query is fast:

EXPLAIN ANALYZE:

"Limit  (cost=56.76..113.52 rows=5 width=1645) (actual time=11.135..11.472 rows=5 loops=1)"  "  ->  Nested Loop  (cost=0.00..5971.38 rows=526 width=1645) (actual time=10.783..11.469 rows=10 loops=1)"  "        ->  Nested Loop  (cost=0.00..4445.81 rows=526 width=8) (actual time=10.774..11.406 rows=10 loops=1)"  "              ->  Index Scan using idx_chemical_structure on chemical_structure chemicalst2_  (cost=0.00..20.26 rows=526 width=8) (actual time=10.755..11.313 rows=10 loops=1)"  "                    Index Cond: (chemical_structure @ ROW('CCNc1ccccc1'::text, ''::text)::bingo.sub)"  "              ->  Index Only Scan using compound_composition_pkey on compound_composition compositio1_  (cost=0.00..8.40 rows=1 width=16) (actual time=0.006..0.006 rows=1 loops=10)"  "                    Index Cond: (chemical_structure_id = chemicalst2_.structure_id)"  "                    Heap Fetches: 10"  "        ->  Index Scan using simple_compound_pkey on simple_compound simplecomp0_  (cost=0.00..2.89 rows=1 width=1645) (actual time=0.004..0.004 rows=1 loops=10)"  "              Index Cond: (chem_compound_id = compositio1_.chem_compound_id)"  "Total runtime: 12.052 ms"  

Also http://explain.depesz.com/s/gU2

Is there any way I can tune PostgreSQL so that, even with DISTINCT, the 3rd party index is only searched as far as the limit and offset clauses require?
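For illustration only, since the SQL here is generated by Hibernate and cannot simply be swapped out: rewriting the duplicate-producing joins as an EXISTS removes the need for DISTINCT altogether, which in some plans lets the limit be applied earlier. A sketch using the tables and operator from the query above:

select simplecomp0_.*
from simple_compound simplecomp0_
where exists (
    select 1
    from compound_composition compositio1_
        inner join chemical_structure chemicalst2_
            on compositio1_.chemical_structure_id = chemicalst2_.structure_id
    where compositio1_.chem_compound_id = simplecomp0_.chem_compound_id
      and chemicalst2_.structure_id @ ('CCNc1ccccc1', '')::bingo.sub
)
limit 5
offset 5;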

EDIT:

After thinking further about the issue I came to the conclusion that there is no solution I can implement. With GROUP BY or DISTINCT the whole query obviously must be run regardless of the limit clause. And if the whole query is run, the 3rd party index must be used, and that takes time (without that index such a search would take minutes, not seconds).

Now about statistics. Here is a quote from the supplier:

The cost for the structure index for operator @ is underestimated and hard coded, so that Postgres will use the index in almost all cases. Again, as I mentioned, the cost-estimation function is not implemented yet (we'll let you know when it's ready).

Restore to last transaction log restore

Posted: 20 Jul 2013 03:20 PM PDT

We have a log shipping setup where we replicate from our primary database to a secondary. That database is, by way of log shipping, in standby and thus read-only. We set it up so we could run reports from it, but it seems the reporting software needs to go through the application server, which needs to log onto the server - in this case, Microsoft Dynamics AX and Microsoft Dynamics ERP.

Our plan is to have a tertiary database - this would also get log backups via log shipping, but in a different way - we would like to roll the database back to what it was at the last transaction log restore, then restore the latest log file(s). Is this possible, and how would we go about it?

EDIT 2013/07/21:

Allow me to rephrase my question. If I have a SQL database with full transaction logging, can I tell it to roll back all changes to a certain point, and if so, how may I accomplish this?

Thanks
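Not an answer from the post, but for the rephrased question: with full recovery you cannot roll a recovered database backwards in place; the usual approach is to restore the backup chain again and stop at the desired moment with STOPAT. A hedged sketch, with hypothetical database, file names and timestamp:

RESTORE DATABASE MyDb FROM DISK = N'D:\backup\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG MyDb FROM DISK = N'D:\backup\MyDb_log_1.trn'
    WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\backup\MyDb_log_2.trn'
    WITH STOPAT = '2013-07-20T12:00:00', RECOVERY;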

SQL server agent SSIS error

Posted: 20 Jul 2013 01:30 AM PDT

I get the following error when I execute my package as a SQL server agent job.

It is an SSIS 2008 package running on a SQL Server 2008 instance. My package security is DontSaveSensitive.

I don't even know how to begin fixing this error.

Where should I check first?

Date                a value of time
Log                 Job History (MyJob)
Step ID             1
Server              PCTSQL004
Job Name            MyJob
Step Name           Job_1
Duration            00:00:00
Sql Severity        0
Sql Message ID      0
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted   0

Message
Executed as user: CS\DmcSysManager. The process could not be created for step 1 of job 0x63BB5A86DB23F947866D2A806BE4CC6B (reason: A required privilege is not held by the client).  The step failed.

Is this a good strategy for importing a large amount of data and decomposing as an ETL?

Posted: 20 Jul 2013 02:31 PM PDT

I have a set of five tables (a highly decomposed schema for an ETL if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a SLEW of other tables, including inserts that just rely on the values I just inserted into the first tables.

I can do my inserts as an A, B, C process, where I insert into the first table, then insert into some table S where exists in A + T (being some other table that has preloaded "configuration data"), then inserting into Z where exists in B + U, etc.

Should I be trying to batch those inserts with a cursor (I know, stone the traitor), or should I just run the raw INSERT INTO scripts and let the server die a thousand heat deaths? Basically I'm worried about starving the server or causing it to collapse from too many index updates or something else related to the inserts.

Should I stage out the inserts as:

  1. Insert into one set of tables
  2. Insert into secondary tables based on the first tables
  3. Insert into tertiary tables, etc.

OR should I insert into all the tables where the data is needed, but do it via cursor, in a "for loop" style pattern of 100k rows at a time (a sketch of this pattern follows the list of considerations below)?

FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors) but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards).

Also bear in mind that I'm normally a C# dev, but I've got the most TSQL experience here and I'm trying to make the best process I can for raw-loading this data as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits.

Things I think are important to the question:

  1. There will be no other load on this server when I do this, I will have complete and total control of the box, and I'm the only user interested here (this is the initial data load before anyone else can do anything with the box)
  2. There are indexes on all the tables, where appropriate, and there are FKs and other predicates
  3. The entire db system is in use for slews of other clients right now, so I know the indexes are needed for operation, but if I should do something to disable those and re-enable them to make the inserts etc faster, I'm looking for those sorts of optimization hacks.
  4. I'm waiting on sample data but I expect the maximum records in a given table to be about a million rows, fairly wide, and that I'm going to insert into those other tables at most a million rows for the ancillary tables that can come secondary or tertiary.
  5. The hardware is "average".
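A minimal sketch of the batched "for loop" pattern mentioned above, without a cursor; table and column names are hypothetical, and the NOT EXISTS is just one way to make each pass pick up only rows not yet copied:

DECLARE @batch INT = 100000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.TargetTable (business_key, payload)
    SELECT TOP (@batch) s.business_key, s.payload
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t
                      WHERE t.business_key = s.business_key);

    IF @@ROWCOUNT < @batch BREAK;   -- the last, partial batch has been copied
END;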

Dropping Hypothetical Indexes

Posted: 20 Jul 2013 11:31 AM PDT

In the past I thought I had deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to cleanup; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'DELETE' options are greyed out.
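One way to confirm that the greyed-out objects really are DTA leftovers (a sketch, not from the post): hypothetical indexes are flagged in sys.indexes, and the query below generates the corresponding DROP INDEX statements for review before running them:

SELECT 'DROP INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ';'
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE i.is_hypothetical = 1;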

SA permissions issues with many nested objects

Posted: 20 Jul 2013 05:32 PM PDT

I have a broker application that's relatively complicated.

Today, after I made some changes, I started getting the error:

The server principal 'sa' is not able to access the database 'XYZ' under the current security context.

The whole scenario up to the point of the error is:

(In Database ABC)

  • A broker message gets submitted into a queue
  • The queue reader picks it up
  • The queue reader updates a table to indicate the work is starting
  • This table has an update trigger on it. The trigger
    • Checks a table in database XYZ to make sure the inserted value for this field is valid
    • The table is accessed via a synonym

I believe the check in the trigger is what is causing the issue.

If I run the update manually, it works fine. I have also used EXECUTE AS to run the update manually as sa which works fine.

Other relevant facts:

  • sa is the owner of both database ABC and database XYZ
  • there's no funny business with the sa account - it has the db_owner role in both DBs as well

Is there some sort of strange scoping happening because all this is running in the context of broker?

Updates

Some more info:

  • DB ownership chaining is on at the server level but not in the DBs. I turned it on and it made no difference.
  • Using a three part name instead of a synonym didn't make a difference
  • Profiler trace shows that the statements are running as SA and not another login
  • Setting both databases TRUSTWORTHY to ON didn't make a difference
  • If I run the queue activation proc manually, it processes correctly (under my credentials).

How should I set up my social network database design?

Posted: 20 Jul 2013 08:31 AM PDT

I am designing a DB for a social network type website where users enter a lot of information, ranging from family member details, education and employment to personal favorites such as TV, movies, music, food, books, etc. I am using InnoDB and expect exponentially increasing writes but far fewer reads. I already have 26 tables.

My question: is it better to have a large number of individual tables (for example TV, movie, music, food, books) or to put them all under one big table such as MyPersonalFavorite to reduce the number of tables? I fear that with my current design there will be 26 individual disk I/Os to write one person's information.

Error "#1118 - row size too large" on the first row of the table only

Posted: 20 Jul 2013 02:30 AM PDT

I ran into a problem using a MySQL database. I have some columns of type text, but when I try to enter data into the first row I get the error "#1118 - Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. You have to change some columns to TEXT or BLOBs" for anything with more than 15 characters. This only occurs for the first row; all the other rows in the column and table work just fine. Help!

Field  Type     Collation        Null  Default  Extra
pid    int(11)                   No    None     AUTO_INCREMENT
3      text     utf8_general_ci  Yes   NULL
6      text     utf8_general_ci  Yes   NULL
7      text     utf8_general_ci  Yes   NULL
8      text     utf8_general_ci  Yes   NULL
9      text     utf8_general_ci  Yes   NULL
10     text     utf8_general_ci  Yes   NULL
21-40  text     utf8_general_ci  Yes   NULL     (each of these columns is defined the same way)
... 41-59 follow

Indexes:

Keyname  Type   Unique  Packed  Field  Cardinality  Collation  Null
PRIMARY  BTREE  Yes     No      pid    61           A

Server Version: 5.5.9
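A commonly suggested workaround, assuming the table is InnoDB (the 8126-byte limit in the message points that way): with the default Antelope/COMPACT row format each TEXT column keeps a 768-byte prefix in the row, so dozens of TEXT columns can overflow the limit even with short values. Switching to the Barracuda DYNAMIC row format stores long columns fully off-page; the table name below is a placeholder:

-- requires innodb_file_per_table = 1
SET GLOBAL innodb_file_format = 'Barracuda';
ALTER TABLE my_table ROW_FORMAT = DYNAMIC;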

Database user specified as a definer

Posted: 20 Jul 2013 10:31 AM PDT

I have a view in my database; the problem is below.

Error SQL query:

SELECT *
FROM `lumiin_crm_prod`.`v_contact`
LIMIT 1;

MySQL said:

1449 - The user specified as a definer ('lumicrm'@'%') does not exist

I Googled for a solution.

The user was created for a specific host and not globally.

How do I create the user globally?
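A hedged sketch of the usual fix: recreate the definer account with the wildcard host so it exists "globally", then grant it whatever rights the view needs (the password and grant list are placeholders):

CREATE USER 'lumicrm'@'%' IDENTIFIED BY 'choose_a_password';
GRANT SELECT ON `lumiin_crm_prod`.* TO 'lumicrm'@'%';
FLUSH PRIVILEGES;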

Problem in Fetching Result set in Desired Format

Posted: 20 Jul 2013 04:31 AM PDT

I have 2 tables having the following columns

Person
------
person_id,
first_name,
middle_name,
last_name

Items
-----
person_item_id,
person_id,
item_type,
status,
ordered_date

The query is

select
    Person.person_id as PID,
    Group_concat(Items.item_type) AS ITYPE,
    Group_concat(Items.status) AS STATUS,
    Group_concat(Items.ordered_date) AS ODATE,
    Group_concat(Items.person_item_id) AS IID
from
    Person inner join
    Items ON Person.person_id = Items.person_id
group by person_id;

The resultset returned is as follows

PID  ITYPE    STATUS   ODATE                        IID
1    A,A,B,C  Y,N,Y,Y  2/5/2012,5/5/2012,17/5/2012  1,1,2
2    A,B      Y,N      5/5/2012,15/6/2012           1,2

One person can have many item types and can order the same item many times. I want to show the record set in such a manner that, whether or not an item was purchased by the person, if the item is present in the record set I draw a column for it and show the corresponding results; if the item was purchased 3 times, I want to show the item and its corresponding records (ordered_date, status, etc.) in sequence.

For example:

If there are 3 items A, B, C then the output should be shown on the screen as:

PID    A           B           C
1      Y           N           Y
       2/5/2012    5/5/2012    7/5/2012
       Y
       17/5/2012
2      Y           N
       12/6/2012   15/6/2012

Now suppose there is another item. What I am doing is first fetching the result set from the above query and then traversing it to check for the existence of that item; if the item is present I show it and its corresponding values. So first I have to traverse the whole result set to get the item_type and only then can I show it.
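Not from the original post, but the usual way to turn rows into per-item columns in MySQL is conditional aggregation; a sketch for items A, B and C (repeat purchases are simply concatenated here and would still need ordering or application-side handling, as discussed above):

SELECT p.person_id AS PID,
       GROUP_CONCAT(CASE WHEN i.item_type = 'A' THEN CONCAT(i.status, ' ', i.ordered_date) END) AS A,
       GROUP_CONCAT(CASE WHEN i.item_type = 'B' THEN CONCAT(i.status, ' ', i.ordered_date) END) AS B,
       GROUP_CONCAT(CASE WHEN i.item_type = 'C' THEN CONCAT(i.status, ' ', i.ordered_date) END) AS C
FROM Person AS p
INNER JOIN Items AS i ON i.person_id = p.person_id
GROUP BY p.person_id;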

Best practice for tempdb log file

Posted: 20 Jul 2013 12:30 AM PDT

I have read many blogs here and there about how to configure tempdb data files, but I didn't find any information regarding the tempdb log file.

Here's the strategy that I'm presently using with my tempdb:

  • I have used the recommendations of Paul Randal on how to split my tempdb data files
  • I have set the size of my tempdb data files to their maximum and disabled autogrowth. For example, I have 100 GB of free disk space and set the size of my 8 tempdb data files to 10 GB each. This prevents fragmentation on the disk, as recommended by Brent Ozar, and it also leaves 20 GB free for my log file.

But like I said, nobody is talking about the tempdb log file. What should I do with it? On my setup, this file is at the same place as the tempdb data files. What is the size and the autogrowth value that I should use with the tempdb log file?

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 20 Jul 2013 03:32 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think now that I can achieve this by wrapping the sql-script in a secondary "caller script". I.e., the caller script calls the other script with @ and then has a fixed exit; after that. The other script doesn't need an exit that way.

  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus or something else)?
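A minimal sketch of the "caller script" idea described above; the script names are hypothetical. WHENEVER SQLERROR makes sqlplus exit with a failure code instead of stopping, and the fixed EXIT means the wrapped script needs none of its own:

-- caller.sql : invoked as   sqlplus -S user/pass @caller.sql some_argument
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR  EXIT FAILURE
@real_script.sql &1
EXIT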

Database schema design help needed

Posted: 20 Jul 2013 09:31 AM PDT

I am developing a PHP application expecting millions of records, both parent and children. My goal is an optimized database design that achieves high speed and performance.

This application will have 1M users, each user will have their own dashboard where they can create their own users/roles, pages and posts.

I am thinking about two possible solutions

  1. Use only 1 table for all users, pages and posts and they will have a foreign key of the owner
  2. Use separate tables for all users, like user1_pages, user1_posts, user1_users

I think that with #1 queries will be slow, as all users will share one table; the second option also doesn't seem like a perfect solution, as the number of tables will grow enormously.

Please suggest the best possible solution and share tips for designing an optimized MySQL schema.

Object name 'Clean Up History' in use when trying to complete the maintenance plan wizard in SQL Server 2008 R2

Posted: 20 Jul 2013 03:30 AM PDT

I am trying to create a maintenance plan on an instance running SQL Server 2008 R2 SP1 (no CUs installed).

When completing the wizard I get the following error:

The object name cannot be changed from "{1E1746D3-B671-4799-8F61-7EE6117257C2}" to "Clean Up History" because another object in the collection already uses that name. Use a different name to resolve this error. ({1E1746D3-B671-4799-8F61-7EE6117257C2})

I've checked both [msdb].[dbo].[sysjobsteps] and [msdb].[dbo].[sysjobs] and neither contain the name "Clean Up History".

The server is being backed up by Microsoft DPM. Could it be that DPM inserts a cleanup job somewhere that I don't see?

Second time query execution using different constants makes faster?

Posted: 20 Jul 2013 12:31 PM PDT

Can someone explain, or direct me to an explanation of, how execution against indexes behaves with different constants in MySQL? I notice that only the first execution against the table takes a long time; after that, with different constants, the query executes very quickly. I would like to know how to make the query take the same amount of time every time it executes with different constants - is there a parameter I can set on or off?

Query execution time: 9 minutes.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
       FROM billingreport AS br
       WHERE br.`addeddate` BETWEEN '2013-02-01 00:00:00' AND '2013-02-28 23:59:59'
         AND br.`status` = 'success'
         AND br.`idvendor` = 10
       GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 2887152
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.00 sec)

Query execution time: 18 seconds.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
       FROM billingreport AS br
       WHERE br.`addeddate` BETWEEN '2013-01-01 00:00:00' AND '2013-01-31 23:59:59'
         AND br.`status` = 'success'
         AND br.`idvendor` = 10
       GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 3004089
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.01 sec)

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 20 Jul 2013 06:32 PM PDT

I have setup SSRS on SQL Server 2008 in native mode.

As an administrator I can login to report manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine, I went into Report Manager as Admin, and at the top level set permissions that should assign the local user to all roles.

When I login to the machine as that user, and then navigate to Report Manager I just get the heading for the page, but do not see any of the folders that are configured.

I've checked and the folders are set to inherit parent permissions and they are showing the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager - shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer) - rsAccessDeniedError

Privileges needed for Oracle Text

Posted: 20 Jul 2013 01:35 PM PDT

I'm new to Oracle Text and I am using it to support a search feature in an application I have written. Everything seems to be working correctly, except that I'm not happy with the fact that I had to store a bunch of things in the ctxsys schema in order to get around insufficient privilege issues. (This is with Oracle 10gR2.)

Here is the SQL script I have to set up Oracle Text for my application

-- Since the CTXSYS schema was created when Oracle Text was installed
-- it does not yet have permission to select on certain tables it needs to
-- for the below procedure
grant select on dmg.table1 to CTXSYS;
grant select on dmg.table2 to CTXSYS;
grant select on dmg.table3 to CTXSYS;
grant select on dmg.table4 to CTXSYS;
grant select on dmg.table5 to CTXSYS;

create or replace procedure ctxsys.attr_indexing_procedure (
    rid  in              rowid,
    tlob in out NOCOPY   clob )
is
begin
    -- This procedure queries the above five tables to extract text and combine it into a single document placed into tlob
end;
/

begin
    ctx_ddl.create_preference('dmg.my_datastore', 'user_datastore' );
    ctx_ddl.set_attribute( 'dmg.my_datastore', 'procedure', 'CTXSYS.attr_indexing_procedure' );
end;
/

begin
    ctx_ddl.create_preference( 'dmg.my_index_lexer', 'BASIC_LEXER' );
    ctx_ddl.set_attribute( 'dmg.my_index_lexer', 'base_letter', 'YES');
end;
/

begin
    ctx_ddl.create_preference('dmg.MY_STEM_FUZZY_PREF', 'BASIC_WORDLIST');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','STEMMER','ENGLISH');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_MATCH','ENGLISH');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_SCORE','0');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_NUMRESULTS','5000');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','SUBSTRING_INDEX','TRUE');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','PREFIX_INDEX','TRUE');
    ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','WILDCARD_MAXTERMS','5000');
end;
/

-- I would prefer that this index be owned by dmg but it looks like that would require the create any table privilege on dmg
-- (because oracle text will create tables in the ctxsys schema when creating the index). Therefore, ctxsys will own the text index.
create index ctxsys.ix_doi_attr on table1(column1) indextype is ctxsys.context
  parameters( 'datastore dmg.my_datastore lexer dmg.my_index_lexer wordlist dmg.DOI_STEM_FUZZY_PREF');

-- Schedule the index to sync every 4 hours
declare
  job_num number;
  nlsvar varchar2(4000);
  envvar raw(32);
begin
  select nls_env,misc_env into nlsvar,envvar from dba_jobs where rownum<2 and nls_env is not null and misc_env is not null;
  select max(job)+1 into job_num from dba_jobs;
  sys.dbms_ijob.submit(job=>job_num, luser=>'CTXSYS', puser=>'CTXSYS', cuser=>'CTXSYS',
                       what=>'ctx_ddl.sync_index(''ctxsys.ix_doi_attr'');', next_date=>sysdate+1/1440,
                       interval=>'SYSDATE+240/1440', broken=>false, nlsenv=>nlsvar, env=>envvar);
  commit;
  dbms_output.put_line('job '||job_num||' has been submitted.');
end;
/

This script is intended to be run by my personal user, which has the DBA role. All the relevant tables are owned by the "DMG" schema, which has very limited privileges.

As you can see, I am using a user_datastore to aggregate text from multiple tables into a single document. Also, I could only make this work if the indexing procedure, the index itself, and the dbms_job to sync the index periodically were all owned by ctxsys, which seems like a bad idea. Preferably, I would like everything to be owned by the DMG schema.

The biggest issue I ran into was in the create index statement. It would always fail with

SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-01031: insufficient privileges
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
29855. 00000 -  "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause:    Failed to successfully execute the ODCIIndexCreate routine.
*Action:   Check to see if the routine has been coded correctly.

After some digging I figured out that the create index statement was attempting to create some tables under the ctxsys schema called DR$IX_DOI_ATTR$I, DR$IX_DOI_ATTR$K,DR$IX_DOI_ATTR$N, DR$IX_DOI_ATTR$P, and DR$IX_DOI_ATTR$R. I found that the DMG schema could create the index if I gave it the CREATE ANY TABLE privilege, but that is not something I want to do in production.

How can I make everything owned by the DMG schema (i.e. minimum privileges I need to grant the DMG schema)? Is there a way for me to remove the grants on the dmg tables to ctxsys?

EDIT:

I was originally incorrect in saying that giving the DMG schema the CREATE ANY TABLE privilege works. It instead changes the error message to this:

SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-01536: space quota exceeded for tablespace 'DMG'
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
29855. 00000 -  "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause:    Failed to successfully execute the ODCIIndexCreate routine.
*Action:   Check to see if the routine has been coded correctly.

This is a very strange message because there are NO quotas set up anywhere on this database. I have no idea how it is hitting a quota limit.
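Regarding the ORA-01536 in the edit: whatever the quota views show, the index tables are being created in the DMG user's tablespace, so granting (or raising) a quota there is the usual remedy. A sketch, assuming the tablespace really is named DMG as the error message says:

ALTER USER DMG QUOTA UNLIMITED ON DMG;
-- or, more conservatively:
ALTER USER DMG QUOTA 500M ON DMG;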

Linked Server Query Results: OPENQUERY vs Distributed Transactions

Posted: 19 Jul 2013 11:30 PM PDT

I have two databases. They are on different servers. One is named REPORTDB and the other is named TRANSDB. TRANSDB is referenced as a linked server on REPORTDB named TRANS.

I execute a query like this from REPORTDB using Four Part Naming and Distributed Transactions:

SET @val = ''
SET @DBName = 'TRANSDB'

SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
              + [name] + CHAR(10) + CHAR(13)
FROM TRANS.db.sys.tables;

I expect the result of this query to generate a CREATE SYNONYM statement for each table in TRANSDB inside my REPORTDB (Table1, Table2, Table3). However, it only creates one statement, and that is for the last table returned in the result set from TRANSDB (Table3).

This correctly returns three CREATE SYNONYM statements for Table1, Table2, and Table3 using OPENQUERY:

SET @val = ''
SET @DBName = 'TRANSDB'

SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
              + [name] + CHAR(10) + CHAR(13)
FROM OPENQUERY( TRANS, 'select [name] FROM db.sys.tables')

Since I do not wish to use openquery, how can I get the DISTRIBUTED TRANSACTION using four part naming to return results correctly?
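One possible workaround, offered tentatively: the SELECT @var = @var + ... concatenation trick is not guaranteed to visit every row, and it appears to misbehave over the linked server here. Building the string in a single expression with FOR XML PATH avoids relying on it; a sketch using the same four-part name as above:

DECLARE @val    NVARCHAR(MAX);
DECLARE @DBName SYSNAME = N'TRANSDB';

SET @val = (SELECT 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
                   + [name] + CHAR(10) + CHAR(13)
            FROM TRANS.db.sys.tables
            FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)');

PRINT @val;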

Merge Replication Subquery Filters

Posted: 20 Jul 2013 05:19 AM PDT

I am using merge replication with SQL 2012. I have a number of articles in my publication and will have to come up with quite an elaborate set of filters.

If I add a filter on my article it can contain a subquery, but I noticed it doesn't pick up on changes properly.

For instance, take Table1 and Table2.

The filter on Table1 is:

select * from Table1 where id in (select id from Table2)  

If more records are added to Table2 I would want the contents of Table1 to be changed accordingly when the sync process next occurs.

I found out that this is not working. Is it because, when I tried it, Table2 was not an article in my merge replication publication, or is it because merge replication doesn't support subqueries like this and doesn't recognise when a filter's result has changed?

What it would mean is that the sync process would have to scan each filter, and work out what tables the subquery relates to and figure out whether these have changed. Does it do something like this?

UPDATE:

I have tried a number of things, but they don't work.

  1. Subquery - doesn't work. This confirms that it won't work: http://support.microsoft.com/kb/324362
  2. Subquery calling a function - doesn't work
  3. Subquery calling a function which takes a guid parameter which I change every time - doesn't work

So the only other option mentioned is a view, but I cannot get the view to work either.

The first problem is that if I replicate a view it just gets replicated as a view and the underlying table needs to be there. But Simon I read your other post and that is not what you are saying.

I think what you are saying is that I should be able to select a table in my replication publication, and then filter it based on a view like this,

select cols from vwUserFilter  

But when I look in management studio I see a filter for my table which looks like this,

SELECT <published_columns> FROM [dbo].[Berm] WHERE   

And that part of my filter is fixed; I cannot edit [dbo].[Berm] and replace it with a view.

So what I think you are suggesting is that I can change the above to,

SELECT <published_columns> FROM [dbo].[BermView]  

And the BermView is a view which filters the Berm table. Is that right, and if so how?
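A hedged alternative to both the subquery and the view, assuming Table2 is (or can be made) an article in the publication: merge replication's supported way to filter one article by the rows of another is a join filter, added with sp_addmergefilter. Publication and filter names below are placeholders:

EXEC sp_addmergefilter
     @publication       = N'MyPublication',
     @article           = N'Table1',
     @filtername        = N'Table1_by_Table2',
     @join_articlename  = N'Table2',
     @join_filterclause = N'[Table1].[id] = [Table2].[id]',
     @join_unique_key   = 1;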
