Monday, October 14, 2013

[how to] Index spool after created PK and index?

Index spool after created PK and index?

Posted: 14 Oct 2013 08:37 PM PDT

I have the following query. It runs slowly, but it performs acceptably when there is no PK on hugeTable. The estimated execution plan shows that half of the cost is "RID Lookup (Heap) [hugeTable] 51%".

I added a PK on hugeTable and created an index covering all the columns used by the pivot subquery. Now 80% of the cost is an "Index Spool (Eager Spool)" on the covering index (preceded by an index scan at 4%).

How can I avoid the "Index Spool" on hugeTable?

select ..., [...], [...], ...
from .... T1 ...
outer apply (
    select k1, k2, [...], [...], ...
    from (
        select k1, k2, col, value
        from hugeTable
        where k1 = T1.K1 and k2 = T1.K2
    ) p pivot (sum(value) for col in ([...], [...], ...)) as pvt
) a
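
One thing that sometimes removes the spool is giving the optimizer an index it can seek on directly, so it has no reason to build its own temporary index. A minimal sketch, assuming the pivot subquery only touches k1, k2, col and value (the index name is hypothetical):

-- Covering index keyed on the correlated predicate columns,
-- with the pivoted columns carried as included columns.
CREATE NONCLUSTERED INDEX IX_hugeTable_k1_k2
ON hugeTable (k1, k2)
INCLUDE (col, value);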

Error when creating an index in SQL 2012

Posted: 14 Oct 2013 08:35 PM PDT

I'm trying to create the following index on our test environment (SQL 2012):

CREATE NONCLUSTERED INDEX [iTransaction-OriginatingTransaction] ON [dbo].[Transaction]
(
    [OriginatingTransactionID_EOT] ASC,
    [OriginatingTransactionID] ASC
)

on our transaction table:

CREATE TABLE [dbo].[Transaction](
    [TransactionID] [int] IDENTITY(1,1) NOT NULL,
    [OriginatingTransactionID_EOT] [varchar](50) NULL,
    [OriginatingTransactionID] [int] NULL,
    [Status] [int] NOT NULL,
    [BusinessID] [int] NOT NULL,
    [CreatedUserID] [int] NOT NULL,
    [CreatedDateTime] [datetime] NOT NULL,
    [ModifiedUserID] [int] NOT NULL,
    [ModifiedDateTime] [datetime] NOT NULL,
 CONSTRAINT [PK_Transaction] PRIMARY KEY CLUSTERED
(
    [TransactionID] ASC
))

I get the following error:

Msg 681, Level 16, State 3, Line 2
Attempting to set a non-NULL-able column's value to NULL.
The statement has been terminated.

If I run the same create index statement on our live database (SQL 2005) then it works without any errors.

Why can't I create this index on SQL 2012?


UPDATE: Results of dbcc checktable:

Msg 8944, Level 16, State 13, Line 1
Table error: Object ID 954995248, index ID 1, partition ID 72057594796965888, alloc unit ID 72057595543093248 (type In-row data), page (1:701352), row 13. Test (ColumnOffsets <= (nextRec - pRec)) failed. Values are 7198 and 186.
Msg 8944, Level 16, State 13, Line 1
Table error: Object ID 954995248, index ID 1, partition ID 72057594796965888, alloc unit ID 72057595543093248 (type In-row data), page (1:701352), row 13. Test (ColumnOffsets <= (nextRec - pRec)) failed. Values are 7198 and 186.
Msg 8928, Level 16, State 1, Line 1
Object ID 954995248, index ID 1, partition ID 72057594796965888, alloc unit ID 72057595543093248 (type In-row data): Page (1:701352) could not be processed. See other errors for details.
Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 954995248, index ID 1, partition ID 72057594796965888, alloc unit ID 72057595543093248 (type In-row data). Page (1:701352) was not seen in the scan although its parent (1:709848) and previous (1:700903) refer to it. Check any previous errors.
Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 954995248, index ID 1, partition ID 72057594796965888, alloc unit ID 72057595543093248 (type In-row data). Page (1:701353) is missing a reference from previous page (1:701352). Possible chain linkage problem.
DBCC results for 'Transaction'.
There are 2354636 rows in 57941 pages for object "Transaction".
CHECKTABLE found 0 allocation errors and 5 consistency errors in table 'Transaction' (object ID 954995248).
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKTABLE (Redebiz.dbo.Transaction).
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Can MySQL Workbench generate data?

Posted: 14 Oct 2013 07:10 PM PDT

Is there a way to generate mass rows of dummy data in a table for testing in workbench?

If not, are there any free tools out there that are able to do that?
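
A common pure-SQL approach, for reference (a sketch; the digits and test_data tables are hypothetical): cross-join a ten-row digits table against itself to mass-generate rows.

-- Build a ten-row helper table, then cross-join it six times
-- to generate one million rows of dummy data.
CREATE TABLE digits (d INT);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

INSERT INTO test_data (id, name)
SELECT n, CONCAT('user_', n)
FROM (
    SELECT a.d + b.d*10 + c.d*100 + e.d*1000 + f.d*10000 + g.d*100000 AS n
    FROM digits a, digits b, digits c, digits e, digits f, digits g
) AS numbers;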

Where EVENTS history are logged/recorded in MySql?

Posted: 14 Oct 2013 08:51 PM PDT

I just setup some stored procedures to run in the event scheduler using CREATE EVENT.

I'm trying to find where the history of its runs is stored (if anywhere). I looked at the docs but couldn't find anything.

Is there some table or log where I can see that my scheduled events successfully ran?

Thanks!
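
One place worth checking (assuming MySQL 5.1 or later): information_schema.EVENTS records each event's status and last execution time, though not a full run history. Errors raised by event bodies go to the server's error log.

-- Shows every scheduled event and when it last fired.
SELECT EVENT_SCHEMA, EVENT_NAME, STATUS, LAST_EXECUTED
FROM information_schema.EVENTS;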

Join on tables (SQL)

Posted: 14 Oct 2013 06:12 PM PDT

I am a newbie to SQL and I have a question here. I am working in PostgreSQL. My tables look like this:

change

id   action_id(fk)   field_id(fk)   old_value   new_value
----------------------------------------------------------
39   15              14                         testPool
40   15              15                         testSystem
41   15              16                         61019

action

id   description    audited_table   audited_row   audited_type
----------------------------------------------------------------
15   Added system   systemtable     61019         insert

field

id   audited_table   name
---------------------------------
14   systemtable     pool
15   systemtable     storagesystem
16   systemtable     id

I want to write a query that produces the following view:

id   description    audited_table   audited_row   audited_type   field.name1   name1->new_value   field.name2     name2->new_value
------------------------------------------------------------------------------------------------------------------------------------
15   Added system   systemtable     61019         insert         pool          testPool           storagesystem   testSystem

Basically, I want to flatten the change table (rows into columns) and join it with the action table. Note that the change table references the action and field tables.
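
A minimal sketch of that flattening using conditional aggregation, assuming the table and column names shown above (PostgreSQL's crosstab() from the tablefunc extension is an alternative):

SELECT a.id, a.description, a.audited_table, a.audited_row, a.audited_type,
       MAX(CASE WHEN f.name = 'pool'          THEN c.new_value END) AS pool,
       MAX(CASE WHEN f.name = 'storagesystem' THEN c.new_value END) AS storagesystem
FROM action a
JOIN change c ON c.action_id = a.id
JOIN field  f ON f.id = c.field_id
GROUP BY a.id, a.description, a.audited_table, a.audited_row, a.audited_type;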

Schema diagram looks like below

Any help is appreciated, Thanks

[schema diagram image]

Improve Speed of MySQL Database Import

Posted: 14 Oct 2013 03:36 PM PDT

I'm importing around 44 GB (14 million rows) of data into an InnoDB MySQL database, and I'd like to do so in a reasonable timeframe. Since I'm doing this on a personal computer, I have only 8 GB of RAM available (and I'm using 1.5 GB for the MySQL database).

Right now, I'm using LOAD DATA INFILE. I've tweaked a lot of the InnoDB settings, including increasing the buffer pool size and the log buffer size. However, the speed of this method drops rapidly as the amount of data in the database increases: the first "chunk" of data (100 MB) loaded in 79 seconds, but by the fourth chunk, the same amount took 1004 seconds to load. Unfortunately, this precipitous drop in transfer rate makes it nearly impossible to load all of the data onto the MySQL server in a reasonable timeframe.

I have several questions about this scenario. Why does the transfer rate drop so drastically? I've heard that MyISAM is more efficient for loading data, so should I use the MyISAM engine to load this data instead? (I'll be accessing the data only locally.) Are there any other techniques I can use to expedite the data loading process?
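
For reference, a sketch of the knobs commonly suggested around bulk InnoDB loads (the file path and table name are hypothetical; values are illustrative, not tuned):

-- Session settings often relaxed for the duration of a bulk load.
SET unique_checks = 0;        -- defer unique-index checking
SET foreign_key_checks = 0;   -- defer FK validation
SET sql_log_bin = 0;          -- skip binary logging of the load
LOAD DATA INFILE '/path/to/chunk.csv' INTO TABLE my_table
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
SET unique_checks = 1;
SET foreign_key_checks = 1;
-- Also worth trying: a larger innodb_log_file_size, and sorting the
-- input by primary key so inserts stay append-only as the table grows.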

What db permissions are needed for pgAgent user?

Posted: 14 Oct 2013 12:45 PM PDT

I have successfully set up the pgAgent daemon (running on the same server as Postgres 9.3). I would like to restrict pgAgent's permissions. I created a 'pgagent' login role and granted it (via a group role) all permissions on the postgres.pgagent schema:

CREATE ROLE pgagent LOGIN
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
CREATE ROLE scheduler
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT scheduler TO pgagent;
grant all on schema pgagent to scheduler;
grant all on all tables in schema pgagent to scheduler;
grant all on all functions in schema pgagent to scheduler;
grant connect on database postgres to scheduler;

However, pgAgent refuses to execute any jobs and just sits there idly, with no error messages in the logs. If I start pgagent as the 'postgres' user it runs fine. Or, if I GRANT postgres TO scheduler, it also runs fine.

What permissions am I missing here?

Configuring MySQL for Power Failure

Posted: 14 Oct 2013 12:25 PM PDT

I have absolutely no experience with databases and MySQL. The situation: I have an embedded device running a MySQL database with a web-based application. When my embedded device shuts down, it simply cuts the power; I cannot perform a controlled shutdown. Given this, how can I configure MySQL to protect it from failures and, in case of a failure, give me the best chance of recovering my database?

While searching, I came across the InnoDB engine as well as some configuration options to set, like sync_binlog=1 and innodb_flush_log_at_trx_commit=1. I have noticed my default engine is InnoDB and binary logs are also enabled. What other settings should I configure for the best possible failure and recovery support?
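
For reference, the two durability settings mentioned can be applied at runtime as well as in my.cnf; a minimal sketch (innodb_doublewrite, on by default, is the third piece that protects against torn pages on power loss):

SET GLOBAL sync_binlog = 1;                    -- fsync the binlog at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 1; -- fsync the redo log at every commit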

Regards, Farrukh Arshad

"Set Password" with a limited user

Posted: 14 Oct 2013 04:20 PM PDT

I'm using MySQL 5.6 and I want to have a limited service-user which is able to create/modify the users of my databases.

Now my problem is that I can't quite figure out what privileges my service user needs to perform all the user administration tasks (create user, grant/revoke db privileges, set password).

a) If I give him the global "create user" and "reload" privileges, he can't use "set password" [= MySQL Error: 1044 (Access denied)]

b) If I give him "select", "insert", "update" on 'mysql'.'user' and 'mysql'.'db', he can't use "set password" [= MySQL Error: 1044 (Access denied)]

c) If I give him "select", "insert", "update" on 'mysql'.*, he CAN use "set password"

I'd like to understand why this is happening, and how to achieve my approach a) or b). I don't want to use approach c).

Can somebody help me out?

Thank you!

Benjamin

Not sure why code is failing when CONCAT_NULL_YIELDS_NULL is set to OFF

Posted: 14 Oct 2013 12:18 PM PDT

In my question over at Pull every other row, starting a 0, and concatenate them together, user @AaronBertrand provided me with a chunk of code that worked fine in testing, but as soon as I put the procedure using the code into place and tested from the calling app, I got an error.

SELECT error: SELECT failed because the following SET options have incorrect settings: 'CONCAT_NULL_YIELDS_NULL'

In testing, if I set CONCAT_NULL_YIELDS_NULL OFF, the procedure fails when called from SSMS, but if I set CONCAT_NULL_YIELDS_NULL ON it works fine. Further testing shows the calling app explicitly sets CONCAT_NULL_YIELDS_NULL OFF, and I cannot override that. So I need to figure out how to update the code block to prevent this problem.

SELECT @TreeLocationStructure = STUFF(
    (
        SELECT ' -> ' + TreeLocation
        FROM (
            SELECT rn = ROW_NUMBER() OVER (ORDER BY ID), TreeLocation
            FROM #TreeSplit
        ) AS x(rn, TreeLocation)
        WHERE rn % 2 = 1
        ORDER BY rn
        FOR XML PATH, TYPE
    ).value('.[1]','nvarchar(max)'), 1, 4, ''
);
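
One workaround to try (a sketch, not verified against this particular app): XML data type methods such as .value() require CONCAT_NULL_YIELDS_NULL ON, but the option can be re-enabled at runtime inside the procedure, overriding what the calling app set, for the duration of the call:

ALTER PROCEDURE dbo.MyProc   -- hypothetical procedure name
AS
BEGIN
    SET CONCAT_NULL_YIELDS_NULL ON;  -- restore the setting the XML method needs
    -- ... the STUFF / FOR XML query from above ...
END;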

SORT, ORDER by Number

Posted: 14 Oct 2013 11:46 AM PDT

I have the following table:

ID   Name    Items
--------------------
1    John    7
2    Peter   533
3    Chang   13
4    Mike    9100

I want to order it by Items. I used ORDER BY items ASC, but it returns:

ID   Name    Items
--------------------
3    Chang   13
2    Peter   533
1    John    7
4    Mike    9100

I want to return:

ID   Name    Items
--------------------
3    Chang   13
1    John    7
2    Peter   533
4    Mike    9100

I think this might be a silly question, but I really don't have any ideas about how to solve this.
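
This usually means Items is stored as a character type, so it sorts lexically ("13" < "533" < "7"). Assuming the goal is numeric ordering, a cast in the ORDER BY fixes it (MySQL syntax sketched; the table name is hypothetical). The longer-term fix is to store Items in a numeric column.

SELECT ID, Name, Items
FROM my_table
ORDER BY CAST(Items AS UNSIGNED) ASC;   -- yields 7, 13, 533, 9100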

Thanks.

how to trigger a statistics update

Posted: 14 Oct 2013 12:25 PM PDT

I have a certain table in my OLTP database, which is bulk-updated by several users.
There is no way to know when they will update the table (anywhere between five times a day and once a week).

The problem is that the update does not cause the statistics to be automatically updated,
but it is a big enough update to cause SQL Server to use a poor query plan
(the table has ~500k rows and the operation inserts/updates between 5k and 20k rows).

My question is: how should I trigger a statistics update?
My thoughts:

  • A job which will run every 30 minutes and check for changes using rowmodctr. The job will then update statistics if necessary
  • A DML trigger which will check for changes and start a job when necessary. The job will update statistics

Of course the solution has to be server side, application changes are not welcome :)

One more thing: I might need the same or something similar for more than just this one table, so the solution has to be generic (use a modified sp_updatestats with parameters for... everything?).
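
A sketch of the first idea, assuming SQL Server 2008+ (rowmodctr in the sys.sysindexes compatibility view approximates changes since the last statistics update; the threshold is arbitrary):

-- Job step: refresh statistics on any table with enough accumulated changes.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'UPDATE STATISTICS '
             + QUOTENAME(OBJECT_SCHEMA_NAME(i.id)) + N'.'
             + QUOTENAME(OBJECT_NAME(i.id)) + N';'
FROM sys.sysindexes AS i
WHERE i.indid IN (0, 1)      -- heap or clustered index = the table itself
  AND i.rowmodctr > 5000;    -- arbitrary change threshold
EXEC sys.sp_executesql @sql;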

SQL Server 2012 Restore backup to new DB Name

Posted: 14 Oct 2013 09:30 AM PDT

I seem to remember that, in 2008, you could restore a backup to a new copy of a DB, by changing the name in the "Destination Database" field from the restore-wizard. It would create a brand new DB, which is a copy of the original DB restored to the point in time that you wanted. I have not for the life of me figured out how to make SQL 2012 do this.

Now, I understand (thanks to Aaron Bertrand) that this didn't really change, and that 2012 is actually making it more obvious to me that this strategy was a bad idea in the first place!

So, what I need to do is this: Create a new DB, 'MyDB_Copy', from an existing DB, 'MyDB', by using its backup files. We have nightly full-backups (.bak) and every-15-minute TLogs (.trn). I don't want the existing 'MyDB' to be affected/touched at all, because it's "live".

After the MyDB_Copy is created from the main full-backup file, I then need to restore a few dozen TLog backups to get it to a certain point in time.
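
For reference, a sketch of that sequence (paths, backup names and logical file names are hypothetical; WITH MOVE is what lets MyDB_Copy's files coexist with the live MyDB):

RESTORE DATABASE MyDB_Copy
FROM DISK = N'D:\Backups\MyDB_Full.bak'
WITH MOVE N'MyDB'     TO N'D:\Data\MyDB_Copy.mdf',
     MOVE N'MyDB_log' TO N'D:\Data\MyDB_Copy_log.ldf',
     NORECOVERY;

-- Repeat for each 15-minute log backup, in order:
RESTORE LOG MyDB_Copy FROM DISK = N'D:\Backups\MyDB_0915.trn' WITH NORECOVERY;

-- After the last log needed (STOPAT can pin an exact point in time):
RESTORE DATABASE MyDB_Copy WITH RECOVERY;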

Average of a row or record across three columns

Posted: 14 Oct 2013 09:55 AM PDT

The AVG function in SQL aggregates over a single column's data. But here, we want to calculate the average of three such columns for each record. In math, we would do

avg = (col1 + col2 + col3) / 3

Similarly, is there any query to calculate avg(col1, col2, col3, ...)?
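
There is no built-in row-wise average, but the arithmetic works directly in the select list; a minimal sketch (table and column names hypothetical; 3.0 forces decimal division). Note that a NULL in any column makes the whole expression NULL; wrap columns in COALESCE(col, 0) if that is not wanted.

SELECT id, (col1 + col2 + col3) / 3.0 AS row_avg
FROM my_table;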

Is it viable to attribute permissions to SQL Server \virtual accounts (NT Service\SQLSERVERAgent, etc)?

Posted: 14 Oct 2013 11:50 AM PDT

Using MS SQL Server 2012 in Microsoft Windows Server 2008.

I am somewhat confused by the New Account Types Available with Windows 7 and Windows Server 2008 R2 virtual Windows accounts [NT SERVICE]\<SERVICENAME> like NT SERVICE\MSSQLSERVER, NT SERVICE\SQLSERVERAGENT, etc.

Is it viable to give permissions to, for example, [NT Service\SQLSERVERAGENT] to access a shared resource (or a local file or directory)?
And how does one do this?
For example, to grant permissions on a Windows file share in order to run psexec?

While browsing the accounts available in the domain and/or on the local/remote computer (via the Find button, or when adding groups and users to a file share), no such accounts are available, and entering one manually gives an error:

"An object (User or Built-in security principal) with the following cannot be found..."


Related (though different) question that provoked this one: How to copy bak files to remote share (without AD/domain accounts involvement)?

How to copy backup files to remote share in SQL Server Agent job without AD/domain accounts involvement?

Posted: 14 Oct 2013 06:43 PM PDT

MS SQL Server 2012, with nightly database backups to the same (local) machine as SQL Server...

I am trying to add another SQL Server Agent job to copy the .bak files to a remote (non-Windows, i.e. Linux) share using non-Windows (non-AD) user/password credentials. I do not have any way to configure or change that access, which is under the control of other, quite remote, people.

For this copying I created a local user with the same user name and password and gave it permissions on the (source, i.e. local) backup folders, after which everything works perfectly from the command line (Win + R, or cmd) if I enter the command manually:

RUNAS /user:UserName /savecred "robocopy d:\SQLBACKUP  \\10.195.xx.yyy\backup /S /purge /MAXAGE:7 /MT:1 /Z"     

but it fails when run as a SQL Server Agent job (the step type is "Operating System (CmdExec)"). SQL Server Agent runs with the standard configuration under the [NT Service\SQLSERVERAGENT] account, and the job is owned by the sa superuser.

Can anybody explain why it is failing and how to make it run correctly (bearing in mind that I do not have access to domain user configuration)?

Doing a point in time restore with CDC enabled; Possible?

Posted: 14 Oct 2013 09:54 AM PDT

I discovered this week, the hard way, that the database restore options NORECOVERY and KEEP_CDC are mutually exclusive. So this begs the question: how do you restore a database, keeping CDC intact, using both full and log backups?

Researching on MSDN etc., I cannot find any documentation on restoring a database with KEEP_CDC using anything other than a single full db restore with RECOVERY specified.

I was able to find one attempt that restored the full backup and subsequent logs without the KEEP_CDC option, waiting until the final log; only then was the database brought online with the RECOVERY and KEEP_CDC options. The result was a corrupt CDC schema, as demonstrated here.

If the intent is to KEEP_CDC on restore, are you truly limited to a full backup only, or is there a mechanism similar to the attempt above to keep it intact during multi-file restores on a server other than the original?

Finding swap causes of MySQL

Posted: 14 Oct 2013 02:17 PM PDT

On my CentOS 6.3 server I have a MySQL 5.5.33 database.
It has 17 tables (15 InnoDB, 2 MyISAM) and 6.7M rows in total. I refactored my schema and added indexes based on my slow query log. My average query time is 20-30 ms, and my database performs well.

But I have some cron queries that run every 3 hours. They don't use any index; they run very slowly, each taking nearly 1500-2000 ms. I don't plan to add new indexes for them, because I would have to add many indexes and those queries run very rarely.

When I restart my database server, swap is normally zero. Over time, swap usage grows gradually; after 13 days, MySQL is using 650MB of swap. I want to find what causes this swapping and try to reduce the swap without degrading performance.

I want to determine whether the cron queries or something else is causing this swap usage.

My top results:

top - 13:33:01 up 13 days, 11:04,  1 user,  load average: 0.77, 1.02, 1.07
Tasks: 148 total,   1 running, 147 sleeping,   0 stopped,   0 zombie
Cpu(s): 27.4%us,  5.3%sy,  0.0%ni, 59.1%id,  7.8%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1020564k total,   854184k used,   166380k free,    73040k buffers
Swap:  2097144k total,   643036k used,  1454108k free,    94000k cached

  PID USER        PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  SWAP COMMAND
 9573 mysql       20   0 2336m 328m 3668 S  7.3 33.0 349:14.25 554m mysqld
15347 examplecom  20   0  219m  32m  10m S  2.7  3.2   0:02.66    0 php-cgi
15343 examplecom  20   0  215m  28m  10m S 10.0  2.9   0:05.80    0 php-cgi
15348 examplecom  20   0  215m  28m  10m S 12.3  2.8   0:03.62    0 php-cgi
15346 examplecom  20   0  215m  28m  10m S  9.6  2.8   0:06.39    0 php-cgi
15350 examplecom  20   0  212m  25m  10m S 10.0  2.6   0:02.19    0 php-cgi
15345 examplecom  20   0  211m  24m  10m S  6.6  2.5   0:04.28    0 php-cgi
15349 examplecom  20   0  209m  22m  10m S  5.3  2.2   0:02.66    0 php-cgi
12771 apache      20   0  334m 5304 2396 S  0.0  0.5   0:02.53  10m httpd
12763 apache      20   0  335m 5224 2232 S  0.3  0.5   0:02.33  11m httpd

Edit: I restarted the MySQL server 2 days ago, so swap is low now, but as time passes it will grow again. top now shows:

top - 23:30:46 up 15 days, 21:01,  1 user,  load average: 0.35, 0.42, 0.42
Mem:   1020564k total,   931052k used,    89512k free,    76412k buffers
Swap:  2097144k total,   280528k used,  1816616k free,   233560k cached

  PID USER        PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  SWAP COMMAND
23088 mysql       20   0 1922m 311m 3440 S  1.0 31.3  50:04.53 143m mysqld
10081 examplecom  20   0  216m  28m  10m S  4.0  2.8   0:01.67    0 php-cgi
10069 examplecom  20   0  215m  27m  10m S  3.3  2.7   0:04.81    0 php-cgi
10070 examplecom  20   0  215m  26m  10m S  8.3  2.7   0:04.75    0 php-cgi
10062 examplecom  20   0  215m  26m  10m S  6.0  2.7   0:06.26    0 php-cgi
10060 examplecom  20   0  214m  25m  10m S  5.3  2.6   0:07.51    0 php-cgi
10074 examplecom  20   0  214m  25m  10m S  6.6  2.6   0:03.01    0 php-cgi
10080 examplecom  20   0  212m  23m  10m S  6.0  2.4   0:01.58    0 php-cgi

And free -m shows:

             total       used       free     shared    buffers     cached
Mem:           996        927         68          0         76        219
-/+ buffers/cache:        631        364
Swap:         2047        273       1774

My /etc/my.cnf file contents:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
skip-name-resolve

slow_query_log=ON
long_query_time=1.2

innodb_file_per_table

max_allowed_packet=32M
thread_stack=256K
max_allowed_packet=32M
thread_stack=256K
max_connect_errors=100000000
max_connections=600

key_buffer=256M
sort_buffer_size=2M
read_buffer_size=2M
read_rnd_buffer_size=2M

thread_cache_size = 8
tmp_table_size=128M
max_heap_table_size=128M
query_cache_size = 209715200
query_cache_limit = 52428800
join_buffer_size=4M
table_cache=2400
low_priority_updates=1
tmpdir = /var/tmp

query_cache_type = 1

innodb_buffer_pool_size=256M
innodb_additional_mem_pool_size=512K
innodb_log_buffer_size=500K
innodb_thread_concurrency=8

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Modeling a database for easy counting / reporting

Posted: 14 Oct 2013 08:26 PM PDT

I have an app where the user is known (user_id) and can perform several actions (action_id). Every time the user performs an action I need to record that fact for reports/analytics. I guess this is similar to other analytics solutions and their DB designs.

Once I have the data, given a time window (minutes resolution), I need to count, for each user (all or some), how many times he performed actions and which actions he performed (sum all data grouped by action_id).

Some assumptions:

  • The number of users is ~1000.
  • Action types are ~100.
  • Actions can happen 24/7.
  • The time windows can span from minutes to days and are random.
  • A time window can't go back more than 30 days.

I'm considering SQL, NoSQL and RRD to save the data.

I put RRD here because it's easy to implement inserting the data via statsd+graphite. My concern with this approach is that querying (although provided by Graphite) is not indexed and will probably have to count all the data whenever I ask for a window/user. Another problem is that querying all the data requires all users' info, resulting in reading all the files concurrently, which I'm not sure is a good thing.

SQL - Very easy implementation for both inserting and querying. Easy to index, order, and group by. However, I'm not sure it's easy if I'm anticipating high traffic. Also, I'm not sure how efficient SQL's COUNT() is after a GROUP BY (I haven't used SQL in the last few years). Can it offer parallel computation?
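
As a sketch of the SQL route (schema and names illustrative): a single narrow fact table with a composite index on the time window makes the count a straightforward indexed GROUP BY.

CREATE TABLE user_actions (
    user_id    INT      NOT NULL,
    action_id  INT      NOT NULL,
    created_at DATETIME NOT NULL,
    KEY ix_window (created_at, user_id, action_id)  -- serves time-window scans
);

-- Counts per user and action inside an arbitrary window.
SELECT user_id, action_id, COUNT(*) AS times
FROM user_actions
WHERE created_at >= '2013-10-01 00:00:00'
  AND created_at <  '2013-10-02 00:00:00'
GROUP BY user_id, action_id;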

NoSQL - Is there a solution out there that is the right fit for this type of scenario (perhaps a Map/Reduce algorithm for fast generation of counts in a time window)?

Thanks for helping me model this.

Putting a Select statement in a transaction

Posted: 14 Oct 2013 01:26 PM PDT

What is the difference between these 2 queries:

start transaction;
select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;
commit;

And without transaction:

select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;

What is the effect of having a SELECT inside a transaction?

If Delete From orders Where id=1 was called from another session right after the Select in both cases, when will it be processed?
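
Worth noting: at the default isolation level a plain SELECT inside an InnoDB transaction takes no row locks, so it does not block other sessions. If the intent is to make the concurrent DELETE wait until COMMIT, the read must be a locking read; a sketch:

START TRANSACTION;
SELECT * FROM orders WHERE id = 1 FOR UPDATE;   -- takes an exclusive row lock
UPDATE orders SET username = 'John' WHERE id = 1;
COMMIT;   -- a DELETE of id = 1 from another session waits until here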

Problem compiling view when it is referencing a table in another schema: insufficient privileges

Posted: 14 Oct 2013 11:26 AM PDT

Oracle 11g R2 Logged on: SYS / AS SYSDBA

When I try to compile or create a view that references tables in the local schema, it works fine.

The problem occurs when I try to compile the same view referencing a table in another schema (schema.table) in my query.

Oracle throws the exception ORA-01031: insufficient privileges.

Remember, I am using the SYS account (as SYSDBA).
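
For context, a common cause: creating a view over another schema's table requires the view's owner to hold a direct grant on that table; privileges received via a role do not count inside DDL. A sketch, with hypothetical names:

-- Run as the table owner (or a DBA): grant the view's owner direct SELECT.
GRANT SELECT ON otherschema.some_table TO view_owner;
-- Add WITH GRANT OPTION if view_owner will in turn grant the view to others.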

representation in ms-access

Posted: 14 Oct 2013 07:26 PM PDT

I have a database in Microsoft Access. I want to know how to look up a single datum from a reference table given a dynamic set of values. Here is a representation of what I mean:

I have the following tables:

Points for Pushups(m):

Reps   Age 17-21   Age 22-26   Age 27-31
------------------------------------------
1      6           7           8
2      7           9           9
3      9           11          12

Fitness Tests:

Name   Reps   Test Date
-------------------------
Bob    2      1 Jan 2009
Jill   1      5 May 2010

People:

Name   DOB
------------------
Bob    1 Jan 1987
Jill   2 Feb 1985
Sal    3 Mar 1991

I want the query to use People.DOB and the test date to find the age the person was at the time of the test. I then want the query to use this value to determine which column to look in, and the value from Reps to determine which row to look in, coming back with the single value and naming it Points.

For example, I want Bob to show:

Query:

Name   DOB          Age At Test   Reps   Points
-------------------------------------------------
Bob    1 Jan 1987   22            2      9

Does anyone know how to do the dynamic reference part?

I know how to build the query and I know how to compute the age; I just don't know how to use the values to select columns in the reference table. I've seen it done, but long ago, and I never looked into it.
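
The column choice can be expressed as a conditional expression; a generic SQL sketch (table names are hypothetical; in Access itself the same logic would use Switch() or nested IIf(), and a real age calculation needs more care than a plain year difference):

SELECT ft.Name, p.DOB, ft.Reps,
       CASE
           WHEN DATEDIFF(year, p.DOB, ft.TestDate) BETWEEN 17 AND 21 THEN pts.[Age 17-21]
           WHEN DATEDIFF(year, p.DOB, ft.TestDate) BETWEEN 22 AND 26 THEN pts.[Age 22-26]
           WHEN DATEDIFF(year, p.DOB, ft.TestDate) BETWEEN 27 AND 31 THEN pts.[Age 27-31]
       END AS Points
FROM FitnessTests ft
JOIN People p         ON p.Name = ft.Name
JOIN PushupPoints pts ON pts.Reps = ft.Reps;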

SQL Server Sproc calls from IBM Cast Iron orchestration with XML payload fail

Posted: 14 Oct 2013 12:13 PM PDT

I'm hoping that someone on this forum may have experience with an IBM appliance, Cast Iron (WebSphere), and how it interacts with SQL Server. I know very little about the Cast Iron appliance and its capabilities, so excuse my ignorance. The scenario is that I have created a series of stored procedures that pass their payloads as well-formed XML in either an input or output parameter, based on the type of call (get or set). We've tried passing the well-formed XML both as the native SQLXML data type and as a string (VARCHAR(max) or NVARCHAR(max)).

Sample:

DECLARE @XMLPayloadIn XML = '<root></root>';
EXEC StoredProcedureName @XMLPayloadIn;

Or

DECLARE @XMLPayloadOut XML = NULL;
EXEC StoredProcedureName @XMLPayloadOut OUTPUT;

The challenge is that the procedure call in Cast Iron's editor is limiting/truncating the well-formed XML at 4000 characters.

Anyone have any experience with this scenario at all?

One other bit of information: when "hooking up" the call to the stored procedure using parameters to pass the XML doc, the data type (SQLXML or VARCHAR/NVARCHAR) actually shows a length of 0, whereas if you pass the XML/string as a result set directly to the caller, the length is >0 and well-formed XML is returned.

Any help is greatly appreciated!

How to setup SQL active/active cluster to achieve Blue / Green instance switching?

Posted: 14 Oct 2013 05:26 PM PDT

I am wondering if anyone has ever used a multi-instance cluster (nee 'Active/Active') to achieve blue/green (or A/B) deployment scenarios, and what the best way of configuring it is (using SQL 2012 / Windows 2008 R2)?

To be specific, what I want is to be able to switch which cluster instance clients connect to without either the clients or the SQL instances knowing (I stress I'm not talking about node failover here). I'm envisaging that the best way to achieve this is something like:

  • Setup 2 node cluster, each of which has InstanceA and InstanceB instances
  • Configure both InstanceA and InstanceB to listen as if they were the default instance on their cluster address (given each instance on a cluster has its own unique IP)
  • Use DNS to switch which virtual address clients actually connect to.

This should hopefully enable me to do the following:

  • Deploy database to instance A, and have clients connect to it via DNS alias as if default instance
  • Deploy new version of database to instance B
  • Vet new version of database (connecting explicitly to cluster\InstanceB)
  • Redirect DNS alias to point to instance B's cluster name
  • Clients now connect to InstanceB without realising anything's changed
  • Both instances can still failover to the other node in a true outage

Joining the dots, it seems like this should be possible:

... but I've never seen a full example. Has anyone done it? Will what's proposed above work? What have I missed?

Time series data for ad platform

Posted: 14 Oct 2013 12:26 PM PDT

I am trying to figure out how to store time series data for an ad platform I am working on.

Basically I want to know some strategies/solutions for storing billions of rows of data so that I can easily search it (about 6-8 indexes on the table) and get fast counts based on queries.

I tried MySQL with the TokuDB engine; it seems very fast, but COUNT queries become extremely slow once the table reaches about 5-8 million rows.

I was looking at some NoSQL alternatives, but since I want to be able to search this data, they are probably not the best solution. I tried DynamoDB; I would have had to store the data in many places to account for all the ways of searching it.

What I am storing is a row in the database for each click on an ad that occurs. This table will grow very fast, especially when this site gets large.

Another solution would be to separate this data per advertiser, meaning each advertiser would have their own table into which all their data goes. Each table would be much smaller and the COUNT queries would be much faster. I could even split it up by advertiser and month.

My goal is to give an advertiser the ability to search and display in a paginated way all their clicks. They should be able to get data between a time period and filter by about 5-8 other indexes if they want to.
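
A sketch of the per-month split using native partitioning instead of separate tables (names illustrative); partition pruning keeps time-window counts from scanning the whole table, and the composite index serves the per-advertiser filter:

CREATE TABLE clicks (
    advertiser_id INT      NOT NULL,
    clicked_at    DATETIME NOT NULL,
    -- other searchable attributes ...
    KEY ix_adv_time (advertiser_id, clicked_at)
)
PARTITION BY RANGE (TO_DAYS(clicked_at)) (
    PARTITION p201309 VALUES LESS THAN (TO_DAYS('2013-10-01')),
    PARTITION p201310 VALUES LESS THAN (TO_DAYS('2013-11-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

SELECT COUNT(*) FROM clicks
WHERE advertiser_id = 42
  AND clicked_at >= '2013-10-01' AND clicked_at < '2013-10-08';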

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 14 Oct 2013 06:26 PM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?
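
For reference, the two requirements are independent and can be combined (a sketch; the subject string is hypothetical). Keeping the password means a stolen client certificate alone is not enough to connect:

GRANT REPLICATION SLAVE ON *.*
TO 'repl'@'%'
IDENTIFIED BY 'secret'
REQUIRE SUBJECT '/C=US/ST=CA/O=Example/CN=replica1';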

How do I determine if a database is the principal in a mirroring setup?

Posted: 14 Oct 2013 08:07 PM PDT

I have two database servers Server1 and Server2, configured with mirroring. A single database, MirrorDB, is mirrored. There is another database on Server1 named OtherDB which is only present on Server1 and is not mirrored. OtherDB has a stored procedure named SP_Z which refers to a table in MirrorDB to compute some value.

When Server1 is the principal for MirrorDB, the SP_Z stored procedure in OtherDB works perfectly; however, when MirrorDB fails over to Server2, SP_Z fails because it cannot open MirrorDB.

How do I solve this problem?
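
A sketch of the usual guard: sys.database_mirroring exposes mirroring_role_desc, so SP_Z can check which role the local copy currently holds before touching MirrorDB:

DECLARE @role nvarchar(60);
SELECT @role = mirroring_role_desc
FROM sys.database_mirroring
WHERE database_id = DB_ID(N'MirrorDB');

IF @role = N'PRINCIPAL'
BEGIN
    -- Safe to read MirrorDB here; otherwise skip or fail gracefully.
    PRINT 'MirrorDB is principal on this server.';
END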

Converting dbo file from windows to linux

Posted: 14 Oct 2013 10:26 AM PDT

I have a .dbo file which was created on Windows. This file loads successfully into the MySQL database on Windows. I need to load the .dbo file into a MySQL/MariaDB database on Linux. How do I convert the file that was created on Windows for use on Linux?

How to execute some script when database starts up

Posted: 14 Oct 2013 07:32 PM PDT

I want to execute some T-SQL code when a database starts up.

The objective is to resume a mirroring session if it is "suspended" when the database comes back to life.

So I've done this code:

begin try
    Declare @state_desc nvarchar(60)
    SELECT @state_desc = mirroring_state_desc
    FROM SYS.database_mirroring
    WHERE database_id = DB_ID('MyDataBase')
    if @state_desc = 'SUSPENDED'
    begin
        ALTER DATABASE [MyDataBase] SET PARTNER RESUME
    end
end try
begin catch
end catch

But how can I make this run when SQL Server starts up?

I used this with no success:

create procedure dbm_startup_resume
as
begin
    begin try
        Declare @state_desc nvarchar(60)
        WAITFOR DELAY '00:00:10'
        SELECT @state_desc = mirroring_state_desc
        FROM SYS.database_mirroring
        WHERE database_id = DB_ID('MinhaBaseDeDados')
        if @state_desc = 'SUSPENDED'
        begin
            ALTER DATABASE [MinhaBaseDeDados] SET PARTNER RESUME
        end
    end try
    begin catch
    end catch

    exec sp_procoption @ProcName = dbm_startup_resume,
        @OptionName = startup, @OptionValue = 'on'
end
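
For comparison, a sketch of how this is usually wired up: the procedure must live in master, and sp_procoption is executed on its own after the procedure is created, not inside the procedure body as above (the database name follows the first snippet):

USE master;
GO
CREATE PROCEDURE dbo.dbm_startup_resume
AS
BEGIN
    DECLARE @state_desc nvarchar(60);
    WAITFOR DELAY '00:00:10';
    SELECT @state_desc = mirroring_state_desc
    FROM sys.database_mirroring
    WHERE database_id = DB_ID('MyDataBase');
    IF @state_desc = 'SUSPENDED'
        ALTER DATABASE [MyDataBase] SET PARTNER RESUME;
END
GO
-- Mark the procedure for automatic execution at instance startup.
EXEC sp_procoption @ProcName = N'dbm_startup_resume',
                   @OptionName = 'startup',
                   @OptionValue = 'on';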
