Saturday, July 13, 2013

[how to] COUNT of rows with parent id equal to the row we have with a parent id of 0 in one query [on hold]



COUNT of rows with parent id equal to the row we have with a parent id of 0 in one query [on hold]

Posted: 13 Jul 2013 09:00 PM PDT

I'd like to do the following in one query:

  • grab a row that has a parent_id of 0
  • grab a count of all the rows whose parent_id equals the id of the row we grabbed

How can I accomplish this in one query? Please let me know if you need more information; I'll gladly provide whatever I can.

Here's an example of what I'm doing now:

select id from messages where parent_id=0

AND THEN

select id from messages where parent_id={{previously_chosen_id}}
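For reference, the two statements above can usually be combined with a self-join and an aggregate. This is a sketch using only the table and columns from the question; the LEFT JOIN keeps parent rows that have no children:

```sql
SELECT p.id,
       COUNT(c.id) AS child_count
FROM messages AS p
LEFT JOIN messages AS c
       ON c.parent_id = p.id
WHERE p.parent_id = 0
GROUP BY p.id;
```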

Thanks

Help creating this query

Posted: 13 Jul 2013 08:35 PM PDT

I have the following tables in a MySQL Database:

messages => contains email messages
addresses => contains email addresses
address_parts => contains email address parts (to, from, replyto, cc, bcc)
message_address_parts => connects an email address to an address part and to a message

What I'd like to accomplish is one query that fetches the messages and, within that same query, the email address of the user whose address part is 4 (the from field).

Currently, I have separated the query that obtains the from email address into its own query, but that means I have to run it for each message (so I can find the from address), when I'd prefer to get it from the query that grabs the messages.
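The per-message lookup can typically be folded into the main query with joins. A sketch; the join-column names (message_id, address_id, address_part_id, address) are assumptions, since the question doesn't show the actual schemas:

```sql
SELECT m.*, a.address AS from_address
FROM messages m
JOIN message_address_parts map
     ON map.message_id = m.id
    AND map.address_part_id = 4   -- 4 = 'from', per the question
JOIN addresses a
     ON a.id = map.address_id;
```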

This is one of the first posts I've made on DBA Stack. I recognize it's not complete. Please assist me in making this question better.

Thanks.

Inconsistent trigram indexing runtimes?

Posted: 13 Jul 2013 06:14 PM PDT

I have a table residences...

I installed trgm: create extension pg_trgm;

After adding trigram indexes to both grp and name columns using:

CREATE INDEX residences_name_trgm ON residences USING GIN (name gin_trgm_ops);
CREATE INDEX residences_grp_trgm ON residences USING GIN (grp gin_trgm_ops);

And checking performance of a simple query... it appears that the total runtime for a query on the name column runs ~100x faster. Why is this?

EXPLAIN ANALYZE SELECT * FROM residences WHERE name ILIKE '%Sutton%';
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on residences  (cost=36.02..47.90 rows=3 width=1872) (actual time=0.390..0.720 rows=21 loops=1)
   Recheck Cond: ((name)::text ~~ '%Sutton%'::text)
   ->  Bitmap Index Scan on residences_name_trgm_gin  (cost=0.00..36.02 rows=3 width=0) (actual time=0.354..0.354 rows=21 loops=1)
         Index Cond: ((name)::text ~~ '%Sutton%'::text)
 Total runtime: 0.814 ms
(5 rows)

and

EXPLAIN ANALYZE SELECT * FROM residences WHERE grp ILIKE '%Sutton%';
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on residences  (cost=97.93..8879.41 rows=5927 width=1872) (actual time=5.516..115.634 rows=5968 loops=1)
   Recheck Cond: ((grp)::text ~~ '%Sutton%'::text)
   ->  Bitmap Index Scan on residences_grp_trgm_gin  (cost=0.00..96.45 rows=5927 width=0) (actual time=4.366..4.366 rows=5968 loops=1)
         Index Cond: ((grp)::text ~~ '%Sutton%'::text)
 Total runtime: 119.779 ms
(5 rows)

For reference, both name and grp columns are CHARACTER VARYING(50).

Furthermore, counting distinct values in each column yields:

SELECT COUNT(DISTINCT grp) FROM residences; -> 421

SELECT COUNT(DISTINCT name) FROM residences; -> 7750

That is, the name column actually has ~20x more distinct values, but returns values much faster.

More information on this technique can be found in depesz's article WAITING FOR 9.1 – FASTER LIKE/ILIKE and a similar tutorial on the PalominoDB blog.

How to use a database across two availability zones

Posted: 13 Jul 2013 06:07 PM PDT

I have servers in the USA and Japan for my website. At the moment they both connect to a MySQL server in the USA, which is obviously perfect for my local USA clients, but quite slow for my Japanese clients.

What is the best strategy for reducing latency? How can I introduce a local MySQL server on the Japanese end and keep the two in sync?

I have considered master-master replication but this appears to be an incomplete and unstable solution.

Are there any other suggestions?

Should I use a separate table to store image file names?

Posted: 13 Jul 2013 04:58 PM PDT

I am learning about database design and I'm writing a Java GUI program with a back-end database. The main table in my database stores a different product in each row. I have some columns such as: product_id (PK), price, stock_quantity, etc. I also have eight columns that store the names of the file names for that product's images: img_file_1, img_file_2 ... img_file_8. The Java program uses those file names to find the images to display on the screen for that product.

Is this poor design? Should I be storing these file names in their own table, with a foreign key linking them back to the main table?
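For comparison, the normalized layout usually puts one row per image in a child table that carries the foreign key back to the product (a sketch; table and column names are illustrative):

```sql
CREATE TABLE product_images (
    image_id   INT PRIMARY KEY,
    product_id INT NOT NULL,        -- FK to the main product table
    file_name  VARCHAR(255) NOT NULL,
    position   TINYINT NOT NULL,    -- 1..8, display order on screen
    FOREIGN KEY (product_id) REFERENCES products(product_id)
);
```

This removes the fixed limit of eight images and avoids NULL-filled columns for products with fewer images.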

The program works fine as is, but I want to make sure I am learning good habits.

SPI postgresql - insert data into table from postgres.c

Posted: 13 Jul 2013 04:45 PM PDT

I'm working inside postgres.c of postgresql. I need to insert some information in a table and I'm trying to do it with SPI using these three lines:

SPI_connect();
SPI_exec("INSERT INTO testvalues (11,6)", 5);
SPI_finish();

but when I start the server and I send a query I get segmentation fault:

LOG:  server process (PID 13856) was terminated by signal 11: Segmentation fault
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
FATAL:  the database system is in recovery mode
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2013-07-14 00:40:22 CEST
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  record with zero length at 0/17D7368
LOG:  redo is not required
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections

Do you have any idea how I can fix it? Is there a better way to insert data from postgres.c?

MariaDB Custom Functions

Posted: 13 Jul 2013 10:02 AM PDT

In MySQL, I wrote functions in C++, compiled them as .so files, and added them to MySQL. I am trying to find a website that shows an example of doing the same in MariaDB. I cannot find any instructions, tutorials, or examples of writing custom functions in MariaDB. Does anyone know of any, or is anyone willing to write a very simple one here, like an md10 function that returns concat(md5(val),md5(val))?

Just to make it clear - I am looking to write compiled code, not define a function in SQL. It isn't just for speed. We need to clone the boxes and have all MariaDB engines fully functional.
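For reference, once such a function is compiled into a shared object, registering it in MariaDB uses the same statement as MySQL (a sketch; the library name is a placeholder):

```sql
CREATE FUNCTION md10 RETURNS STRING SONAME 'md10.so';
```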

Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)

Posted: 13 Jul 2013 01:20 PM PDT

I'm trying to load a very normal .csv file (that was created from Excel 2011 for Mac) into SequelPro (using MySQL) with my Mac -- and I've recently started getting this error consistently. Can anybody let me know what it is and how to fix it?

An error occurred while trying to add the new table 'wblist' by

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255),
  `FIRSTNAME` VARCHAR(255),
  `MIDDLE` VARCHAR(255),
  `LASTNAME` VARCHAR(255),
  `FULLNAME` VARCHAR(255),
  `GENDER` VARCHAR(255),
  `ADDRESS` VARCHAR(255),
  `CITY` VARCHAR(255),
  `STATE` VARCHAR(255),
  `ZIP` VARCHAR(255),
  `PHONE` BIGINT(11),
  `UNIT` VARCHAR(255),
  `JOB` VARCHAR(255),
  `AREA` VARCHAR(255),
  `TIME` VARCHAR(255),
  `MAILINGADDRESS` VARCHAR(255),
  `MAILINGCITY` VARCHAR(255),
  `MAILINGSTATE` VARCHAR(255),
  `MAILINGZIP` VARCHAR(255),
  `ID` BIGINT(11),
  `CONFIDENCE` VARCHAR(255),
  `BIRTHDATE` VARCHAR(255),
  `AGE` INT(11),
  `RACE` VARCHAR(255),
  `ETHNICITY` VARCHAR(255),
  `RELIGION` VARCHAR(255),
  `PARTY` VARCHAR(255),
  `REGISTRATIONDATE` VARCHAR(255),
  `VOTERSTATUS` VARCHAR(255),
  `OtherPhone` VARCHAR(255),
  `POSSIBLEADDRESS` VARCHAR(255),
  `POSSIBLEMAILADDRESS` VARCHAR(255),
  `RECID` VARCHAR(255)
) ENGINE=CSV;

MySQL said: The storage engine for the table doesn't support nullable columns

This is stopping me before I'm able to import the table. Thanks for the help!
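For context, the CSV storage engine requires every column to be declared NOT NULL, so a definition along these lines (trimmed to three columns for brevity) would be accepted:

```sql
CREATE TABLE `wblist` (
  `FILE`      VARCHAR(255) NOT NULL,
  `FIRSTNAME` VARCHAR(255) NOT NULL,
  `PHONE`     BIGINT(11)   NOT NULL
  -- ... each remaining column likewise declared NOT NULL ...
) ENGINE=CSV;
```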

IN and NOT IN for XML column

Posted: 13 Jul 2013 09:40 AM PDT

I have a table with an XML column. The XML is similar to:

<Root>
  <Row>
    <user>abc</user>
    <Rowid>1</Rowid>
  </Row>
  <Row>
    <user>vf</user>
    <Rowid>2</Rowid>
  </Row>
  <Row>
    <user>ert</user>
    <Rowid>3</Rowid>
  </Row>
  <Maxrowid>3</Maxrowid>
</Root>

Now, the query below returns the sl_no and myxmlcolumn columns of rows whose XML contains the value 'abc' or 'xyz' in a user node. It behaves like SQL's IN operator.

SELECT
    [mytable].[Sl_no],
    [mytable].[myxmlcolumn]
FROM [mydb].dbo.[mytable]
WHERE
    [myxmlcolumn].exist('for $x in /Root/Row where (($x/user[fn:upper-case(.)=(''ABC'',''XYZ'')])) return $x') > 0

I want a similar query that does what SQL's NOT IN does: in my case, rows that do not have the values 'abc' or 'xyz' in any user node of the XML column. Please help me with this.
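For what it's worth, exist() returns 0 when no node matches, so the predicate above can be inverted by comparing against 0 instead of > 0. A sketch based on the query above; verify it against real data:

```sql
SELECT
    [mytable].[Sl_no],
    [mytable].[myxmlcolumn]
FROM [mydb].dbo.[mytable]
WHERE
    [myxmlcolumn].exist('for $x in /Root/Row where (($x/user[fn:upper-case(.)=(''ABC'',''XYZ'')])) return $x') = 0
```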

alter or recreate?

Posted: 13 Jul 2013 05:38 AM PDT

I am developing a website whose functionality includes allowing user to submit some information to database.

Sometimes users will want to save that info and submit it later, so it will be written to the database.

Since this information has quite a complicated structure (it uses several tables), it's kind of tricky to modify (using ALTER many times). I wonder if it would be more efficient to delete the old info altogether and insert completely new info into the database, even though the new and old info are largely the same?

Thanks in advance!

2NF relations of Normalization [on hold]

Posted: 13 Jul 2013 02:11 AM PDT

Is it possible to draw the ERD of the 2NF as a 1:M diagram? The question's table is about agent_id and event_id, so it should be impossible for a customer to reserve one event using two agents, if I understand correctly.

Please help me and thank you!

Table about reserving event

Agent
    agent_id,
    agent_name,
    agent_tel,
    agent_salary

Event
    even_Id,
    event_name,
    event_cusname,
    event_date

Cassandra multidatacenter configuration with 1 external ip

Posted: 13 Jul 2013 06:07 PM PDT

I'm trying to set up a multi-datacenter Cassandra cluster. The problem is that my datacenters have only 1 external IP (WAN IP). I can set up port forwarding on the datacenter switches to access each node from the outside world using a different port, but I don't know how to set up the cassandra.yaml file properly.

Is there a way to set up a multi-datacenter Cassandra cluster in this scenario?

Thanks in advance!

Dump PostgreSQL without DROP DATABASE

Posted: 13 Jul 2013 03:58 AM PDT

I want to dump a database, but without the DROP DATABASE and CREATE DATABASE instructions, in the generated .backup / .sql file.

Currently I use this command:

pg_dump --format=c --no-privileges --no-owner --file=/tmp/`hostname`-`date +%d%m%Y`.backup --username=admin --host=127.0.0.1 database_name  

But, in top lines of file, I have this:

DROP DATABASE...
CREATE DATABASE...

In my case, I want to duplicate this database, and if I use this dump with pg_restore it drops the other database (which I don't want).
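For context, a --format=c dump contains no literal DROP/CREATE DATABASE statements; pg_restore emits them only when given --clean or --create. A duplicate can therefore be made by restoring into a freshly created database (a sketch; names are placeholders):

```shell
createdb duplicate_db
pg_restore --no-owner --no-privileges --dbname=duplicate_db /tmp/myhost-13072013.backup
```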

optimizing MySQL for traffic analytics system

Posted: 13 Jul 2013 05:20 PM PDT

background :

I've developed a URL-shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as graphs (analytics data) to the person who shortened the link. Currently I'm using MySQL and have a table that stores click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)  // as foreign key to the shortened URL

For now, only the url_id field has an index.

The system should present click analytics for the time period the user wants, for example the past hour, the past 24 hours, the past month, and so on.

For example, to generate graphs for the past month, I run the following queries:

SELECT ALL DAY(date) AS period, COUNT(*)
FROM (
    SELECT *
    FROM visits
    WHERE url_id = '$url_id'
) AS URL
WHERE date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DAY(date)

// another query to display clicker browsers in this period
// another query to display clicker countries in this period
// ...

issues:

  • for a shortened link with about 500,000 clicks, it takes about 3-4 seconds just for the first query, so about 10-12 seconds for all the queries, which is terrible.
  • lots of memory and CPU are needed to run such queries

questions :

1- How can I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) with less CPU and RAM usage? Should I make an index on the fields date, country, browser, device, os, referrer? If yes, how do I do that for the date field, given that I sometimes group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on?

2- Is MySQL suitable for this application? Assume at maximum my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, is my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions?
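On question 1, a commonly suggested first step is a composite index so MySQL can seek on the link and range-scan on the date, instead of examining all clicks for the link. A sketch; the index name and the literal url_id are placeholders:

```sql
ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);

SELECT DATE(date) AS period, COUNT(*)
FROM visits
WHERE url_id = 123
  AND date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DATE(date);
```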

UPDATE: I made an index on the referrer column but it didn't help at all; it even hurt performance. I think that's because of the low cardinality of this column (and the others) and the large resulting index size relative to my server's RAM.

I don't think indexing these columns will solve my problem. My current ideas are one of these:

1- If staying with MySQL, maybe generating statistics for high-traffic links with background processing is better than calculating them live at request time.

2- Using a caching solution like memcached to help MySQL with high-traffic links.

3- Using a NoSQL database such as MongoDB and solutions like map-reduce, which I'm only vaguely familiar with and have never used.

what do you think?

SSIS organization

Posted: 13 Jul 2013 06:48 AM PDT

I'm familiar with SSMS, but I've never used .NET, C#, or Visual Studio (I've been doing other things: assembler, C, Unix, vi, Oracle, etc.).

I'm trying to figure out basic 2012 SSIS ETL (no BI or data warehouse) and can't find anything that explains how all of these fit together:

catalog
solution
project
package
any others?

Say I just want to build a simple test using SSIS that imports a file into a table. Which of the above are required, and in what order do I create them?

Also, do you have any good links to basic SSIS 2012 ETL overviews and/or how-to articles?

LDAP in SQL Server 2012

Posted: 13 Jul 2013 04:58 PM PDT

In our current system we query LDAP from SQL Server 2008 R2, but we are moving to SQL Server 2012, where we have hit a snag. According to http://msdn.microsoft.com/en-us/library/cc707782.aspx, one of the discontinued features in SQL Server 2012 is the AD helper service.

Does anyone know of a way to query the AD from SQL Server 2012?
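One frequently mentioned alternative is a linked server over the ADSI OLE DB provider queried with OPENQUERY. A sketch; the LDAP path is a placeholder and must match your domain, and the AD provider caps the number of rows returned:

```sql
EXEC sp_addlinkedserver 'ADSI', 'Active Directory Services',
     'ADSDSOObject', 'adsdatasource';

SELECT *
FROM OPENQUERY(ADSI,
  'SELECT cn, mail FROM ''LDAP://DC=example,DC=com'' WHERE objectClass=''user''');
```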

Percona Xtrabackup on Suse EL 10 -- Failing

Posted: 13 Jul 2013 05:58 AM PDT

I have evaluated Percona Xtrabackup successfully and it works like a charm on my several CentOS / RHEL servers! Hats off to the team!

But my problem starts when I tried to install this on one of my client's SUSE EL 10 servers. I believe XtraBackup is not natively supported on SUSE, but since it is an RPM-based platform, I tried installing the RHEL5 RPMs from the Percona site. They install (a fairly old version, 1.5.x), but when run they throw an error like the following.

Command I am running : /usr/bin/innobackupex --ibbackup=xtrabackup --defaults-file=/etc/my.cnf --user=some_user --password=password /root/backups

Error shown is : innobackupex: fatal error: no 'mysqld' group in MySQL options

I have made sure that my.cnf exists and contains all the necessary parameters, like datadir and the log-file-related parameters.

See if you can help me with this, and let me know if you need any more information or logs that I can provide.

MySQL replication - slave update

Posted: 13 Jul 2013 03:58 AM PDT

I have a master slave setup of MySQL and my question is that if I make any changes in the slave database:

1. Will it mess up the sync in any way?
2. Will the changes get overwritten from the master during the next replication event?
3. Can I make option (2) happen?
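For context, a common safeguard while testing this is to keep the slave read-only, so that only the replication thread (and users with the SUPER privilege) can write to it (a sketch):

```sql
SET GLOBAL read_only = ON;
-- or persistently, in my.cnf under [mysqld]:
--   read_only = 1
```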

Fulltext search limiting users possibilities

Posted: 13 Jul 2013 07:57 PM PDT

We have a web site with a search box. The search uses a full-text indexed column. However, we can't just feed the text the user has input into this stored procedure.

CREATE PROCEDURE [dbo].[SearchPages]
    @Term varchar(200)
AS
BEGIN
    SELECT pc.SearchData
    FROM PageContent pc
    WHERE CONTAINS(pc.SearchData, @Term)
END

Searches with spaces in them fail; also, there is a plethora of SQL functions we do not want to expose to the users, like

NEAR((bike,control), 10, TRUE)  

and the like, or binary operators like AND or OR.

So we need to escape the term in some way.

One way that immediately comes to mind is to put an AND between every word. However, it doesn't feel like that's a good solution; it reminds me too much of 1998-style coding.
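That word-joining idea can be sketched roughly as follows. This assumes the input has already been stripped down to letters, digits, and spaces, and it uses STRING_SPLIT/STRING_AGG, which require newer SQL Server versions (older ones need a loop):

```sql
DECLARE @Term varchar(200) = 'mountain bike control';
DECLARE @Safe varchar(400);

-- Quote each word and join: "mountain" AND "bike" AND "control"
SELECT @Safe = STRING_AGG('"' + value + '"', ' AND ')
FROM STRING_SPLIT(@Term, ' ')
WHERE value <> '';

-- then: WHERE CONTAINS(pc.SearchData, @Safe)
```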

So, are there any better suggestions?

Having too many connection problem

Posted: 13 Jul 2013 02:06 AM PDT

We have a Magento store with 9K products and 2 store views. The trouble is we are getting "too many connections" errors while the site is under normal load.

1) We have 100 connections open in MySQL.
2) We have PHP/Apache/MySQL on the same box, with 16 GB RAM and a quad-core processor.
3) A cron runs every 5 minutes to submit the feed to Google and sync with Amazon.


Any idea how we can overcome this problem?

Many thanks

Primary replica set server goes secondary after secondary fails

Posted: 13 Jul 2013 04:26 PM PDT

I have a 2-server replica set in which, after the secondary fails, the primary goes into secondary mode while the failed member is in STARTUP2 (recovering). The problem is that I can't freely use the collection stored in that replica set; I'm getting errors trying to use it:

pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2  

Sometimes, if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary):

Tue May  7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN
Tue May  7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary
Tue May  7 17:43:40.682 [rsMgr] replSet relinquishing primary state
Tue May  7 17:43:40.682 [rsMgr] replSet SECONDARY
Tue May  7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary

Is there an easy way to make the primary keep being primary after the secondary fails? Am I doing something wrong?

Thanks in advance!
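For context, a two-member set cannot see a majority when either member is down, which is exactly why the primary steps down. The standard remedy is a third vote, typically an arbiter, added from the mongo shell (the host name here is a placeholder):

```javascript
rs.addArb("arbiter.example.com:27017")
rs.status()
```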

MySQL backup InnoDB

Posted: 13 Jul 2013 12:58 PM PDT

I have a VoIP server running 24x7. Even at low-peak hours at least 150+ users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I'd like to take at least two full database backups without shutting down my service.

As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good.

Read here if you are interested.

As I'm not a DBA, I'd like to know which would be the best solution for taking a database backup in my scenario.
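For a workload that is mostly InnoDB, the commonly cited starting point is still a single-transaction dump. A sketch; the caveats from the linked article still apply, and MyISAM tables are not protected by the transaction:

```shell
mysqldump --single-transaction --routines --triggers \
  --all-databases > full_backup.sql
```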

Thanks,

Strange characters in mysqlbinlog output

Posted: 13 Jul 2013 10:58 AM PDT

Has anyone experienced this? Data replicates fine, but when output with mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

12c Grid Agent installation issues - Shell path is incorrect or not defined

Posted: 13 Jul 2013 04:58 AM PDT

I am installing 12c Grid Control agent on my server via Enterprise manager.

The target server is a UNIX server with the AIX OS on it. I add the target host manually and enter the authentication information, host name, installation directory, etc., and all goes well until I click the "Deploy Agent" button, when I get the following error:

"Shell path is incorrect or not defined.:/bin/bash(SH_PATH),-c(SH_ARGS) on host MyHostName"

and the details are:

"Check the property values in the following files in this order, ssPaths_.properties or sPaths.properties or Paths.properties, in "/u02/app/oracle/middleware/oms/oui/prov/resources"

There isn't a /u02/app directory on my target server, only /u01/app/oracle, so I assume the above directory is on the same server as Enterprise Manager.

Although I did find a directory where I have files of the format ssPaths_.properties: /u01/app/oracle/product/11.2.0.3/oui/prov/resources

Since my platform is AIX, I used the file ssPaths_aix.properties.

On my target server, which bash returns /usr/bin/bash, which ssh returns /usr/bin/ssh, and which sh returns /usr/bin/sh.

So I edited the ssPaths_aix.properties file so that SH_PATH=/usr/bin/sh, but I'm still getting the same error. I have also tried setting SH_PATH=/usr/bin/bash, but nothing seems to make any difference. One possibility is that I am editing the wrong file.

Can anyone help?

How to speed up backup and restore of a 500GB database using mysqldump?

Posted: 13 Jul 2013 02:57 AM PDT

There is a database A with a size of 500GB. Database A contains both MyISAM and InnoDB tables. The MyISAM tables are master tables and the InnoDB tables are the main transaction tables.

Backup and restore using mysqldump is taking quite a long time, even days.

  • max_allowed_packet=1G
  • foreign_key_checks=0
  • auto_commit = off

Connecting to a SQL Server database from a Flash program

Posted: 13 Jul 2013 11:58 AM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database with Adobe Flash Builder, are there any additional steps I must take to make the database live, or, as long as my computer is running, will this database be accessible from any device that uses it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Oracle Express edition on Ubuntu - control file missing

Posted: 13 Jul 2013 05:58 PM PDT

I have installed the Oracle Express edition on Ubuntu as mentioned here.

I am facing issues when I try to create a sample table.

Started oracle

$ sudo service oracle-xe start
Starting Oracle Database 11g Express Edition instance.

Started sqlplus

$ sqlplus / as sysdba  

Executed the CREATE command

SQL> CREATE TABLE SAMPLE (ID NUMBER);
CREATE TABLE SAMPLE (ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

After a series of research on web, I tried to shutdown and restart oracle:

Shutdown command

SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.

Started the oracle instance

SQL> startup
ORACLE instance started.
Total System Global Area  688959488 bytes
Fixed Size                  2229688 bytes
Variable Size             411044424 bytes
Database Buffers          272629760 bytes
Redo Buffers                3055616 bytes
ORA-00205: error in identifying control file, check alert log for more info

I realized that the control file is missing at /u01/app/oracle/oradata/XE. XE is the DB name.

So I tried to create the control file as follows:

SQL> CREATE CONTROLFILE SET DATABASE XE RESETLOGS;

Control file created.

Tried to create the sample table again

SQL> CREATE TABLE SAMPLE(ID NUMBER);
CREATE TABLE SAMPLE(ID NUMBER)
ERROR at line 1:
ORA-01109: database not open

So I tried to issue the following command

SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/product/11.2.0/xe/dbs/dbs1XE.dbf'

What should be done next? I am clueless as I am not a database guy.

Note:

Output of

$ lsnrctl services

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 04-JAN-2013 09:15:37

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

How do I minimise logging during a large insert?

Posted: 13 Jul 2013 01:23 PM PDT

During a nightly scheduled task I create a large table from scratch using SELECT INTO and then add a clustered index. I would like to minimise the amount of logging during the whole process, as I have limited bandwidth for log-backup shipping.

I'm happy to consider an alternative approach if that would help.
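One possibility, sketched below with placeholder names: under the BULK_LOGGED recovery model, both SELECT INTO and the index build can be minimally logged. The catch for log shipping is that the subsequent log backup still contains the changed extents, so the bandwidth saving may be smaller than hoped, and point-in-time restore is lost for that interval:

```sql
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

SELECT *
INTO dbo.BigTable_New
FROM dbo.SourceData;          -- minimally logged here

CREATE CLUSTERED INDEX CIX_BigTable_New
    ON dbo.BigTable_New (KeyCol);

ALTER DATABASE MyDb SET RECOVERY FULL;
-- take a log backup promptly afterwards to close the bulk-logged window
```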

[MS SQL Server] Run DBCC DBREINDEX on all tables and 50 are 10% fragmented and 3 are 100%



Run DBCC DBREINDEX on all tables and 50 are 10% fragmented and 3 are 100%

Posted: 12 Jul 2013 06:40 AM PDT

I ran DBCC DBREINDEX on all tables; 50 are 10% or more fragmented and 3 are 100%. Most of the 50 are in the 60% to 90% range.

[code="sql"]
USE DataWarehouse
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
[/code]

I know that DBREINDEX is being deprecated, so for the top 10 tables I ran the following:

[code="sql"]
ALTER INDEX ALL ON Schema.TableName
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = ON);
[/code]

What could be preventing the tables from dropping down to 0%? Some of the tables are heaps; I could see why that would be a problem.

Edit: typo

Data Base File Swap

Posted: 12 Jul 2013 11:42 PM PDT

Hi All,

We have a huge load scheduled every day, and during that time end users have no access to the data. We need to remove the dependency between the load and usage. The plan is to always load data into a Load DB and, once the load is done, swap the loaded DB in as Current.

Please help me understand the steps and also the limitations of this approach.

Sudhir Nune

Can we Create sql server instance through script?

Posted: 12 Jul 2013 04:19 PM PDT

Hi, I am using SQL Server 2008 Standard Edition. Is there a way to create a new server instance with a script, besides the default instance? Thanks.
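For reference, SQL Server setup can add a new named instance unattended via a configuration file captured from a previous install. A sketch run from the installation media; the file path and any instance settings inside the .ini are placeholders:

```shell
Setup.exe /Q /ACTION=Install /CONFIGURATIONFILE=C:\ConfigurationFile.ini
```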

Question on uninstalling sql 2008 r2 sp2 CU3

Posted: 12 Jul 2013 11:00 AM PDT

Hi All,

I had to uninstall CU3 (KB2754552); the build number before the uninstall was 10.50.4266.0. I was thinking the build number would change to 10.50.4000.0 after the uninstall. Does anyone know if that is not the case? I still see the same build number as before; I restarted the SQL service as well, still the same. This is SQL Server 2008 R2, 64-bit.

http://support.microsoft.com/KB/2754552

Regards,
SueTons

Alert When Data Files Have Less Potential Growth than Autogrowth

Posted: 12 Jul 2013 08:04 AM PDT

I have a production database that grows at a rate of approximately 5 GB per week and is 1.2 TB in total size currently. We have a weekly task to grow the files at a rate equal to our expected growth, thus avoiding autogrowth events. But occasionally there is unusual growth and an autogrowth event does occur. We also have the data files configured with maximum size limits to avoid filling storage devices and to avoid the performance impact of jumbo files. Our current max is 250 GB per file. I know that's too big, but it's better than no limit. :-D

Our production DBAs routinely monitor the database to make sure a file has not hit that bad place of Potential Growth = 0 MB. This condition was discovered today.

My question is: how can I automatically monitor and alert on the condition Potential Growth <= 2 * Autogrowth?

I am using Idera SQL Diagnostic Manager, which provides reasonable monitoring, but I have not been able to find a way to alert on this condition in the tool.

Thanks in advance,
Greg

[SQL 2012] Can't find options to retain partitions in SSAS Tabular Deployment?



Can't find options to retain partitions in SSAS Tabular Deployment?

Posted: 04 Mar 2013 11:05 AM PST

So I created a new Tabular cube on 2012 and deployed it for a while, and I created a few new partitions via SSMS on the server to cover all the data. Now I have made some changes in the project and plan to deploy it to the server, but I can NOT find an option to ignore existing partitions (the ones not defined in the project itself).

I have tried it on a test db, and no matter what I do the project deploys its definition and overwrites what's on the server, meaning all the partitions I created after deployment are WIPED! I have billions of rows of data, so reprocessing all the missing partitions is really not preferable, and managing the partitions in the project but not in SSMS is also not preferable, as we typically use scripts to add and manage partitions after it goes live.

We have found some blog posts about changing the .deploymentoptions file, but we don't know what value we should change it to (not in BOL). We changed it to "RetainPartitions" as a test, but that doesn't work.

Has anyone been through this? Thanks.

Loading Infobright with SSIS in SQL Server 2012

Posted: 12 Jul 2013 07:54 AM PDT

I am playing around with Infobright but am having some problems populating it using SSIS. Infobright uses the MySQL 3.5.1 ODBC driver and appears to external applications as if it were MySQL.

Using the ADO.NET Destination I can load data from SQL Server into Infobright, provided the population mode is set to row-by-row rather than batch. Populating a column-store database row by row is s...l...o...w.

Has anyone else tried populating Infobright with SSIS, and if so, what did you do to get it to work in an acceptable manner?

[T-SQL] ALTER SERVER ROLE syntax error



ALTER SERVER ROLE syntax error

Posted: 07 Aug 2012 10:08 PM PDT

Hi all,

I am trying to run the following but I get a "syntax near 'ROLE'" error. It looks OK to me; any ideas? I am using SQL 2012 SSMS against a SQL 2008 R2 instance.

[code="sql"]
CREATE LOGIN [xxx\xxx] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [xxx\xxx]
GO
[/code]
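For context, ALTER SERVER ROLE ... ADD MEMBER is new syntax in SQL Server 2012, so a 2008 R2 instance rejects it regardless of which SSMS version sends it. The pre-2012 equivalent, using the placeholder login from the question, is:

```sql
EXEC sp_addsrvrolemember 'xxx\xxx', 'sysadmin';
```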

Complex joins: retrieving the equivalent results

Posted: 12 Jul 2013 07:58 PM PDT

Hello. I have a problem doing a complex join between multiple tables. Here are my tables (the wide audit columns DNAS/DFINV/UPS/UPD are trimmed for readability):

[code="sql"]GLCMS
IDN  KMSO KCVA FSCVA
0099 0001 0007 N
0099 0001 000D A
0099 0003 0007 A
0099 0003 000D A
0099 0009 0007 A
0099 000A 000D A

GLMSO (GLARA was posted with the same rows)
IDN  KMSO KMSOP KARA COD             NOM             DINIV    DFINA    FAB KTDM
0099 0003 NULL  01   asdasd          asdasdas        00000000 99999999 A   02
0099 0006 NULL  02   ModuleSoftware2 NormeCondition2 00000000 99999999 A   01
0099 0007 NULL  01   Impersonaneeee  Pendice         00000000 99999999 A   01
0099 0009 NULL  02   Prodologia      34234234234     00000000 99999999 A   01
0099 000A NULL  02   ProblemoFiltre  Benj

GLPSI
IDN  KPSI NOM         DNAS     ORD FAB SIG    DESCR                   COD NTE
0099 01   Pacchetto 1 00000000 0   A   EMEPS1 Il pacchetto 1          PS1 sdsdafdsa
0099 02   Pacchetto 2 00000000 0   A   EMEPS2 Questo è il pacchetto 2 PS2 NULL

GLTDM
IDN  KTDM SIG  NOM       DTDM                  IMG           FAB
0099 01   MAR1 Marchio 1 Questo è il marchio 1 NULL          A
0099 02   MAR2 Marchio 2 Questo è il marchio 2 0x89504E47... A

GLCVA
IDN  KCVA KCTR DCVA          FSCVA
0099 0001 01   Distribuzione A
0099 0002 01   Vendita       A
0099 0003 02   Energia       A
0099 0004 02   Acqua         A
0099 0005 02   Gas           A
0099 0007 07   aaa           A
0099 0008 06   v2            A
0099 0009 08   aaaa          A
0099 000A 08   yyyyjjjjj     A
0099 000B 04   aa            A
0099 000C 04   bb            A
0099 000D 06   v3            A
0099 000E 06   v4            A

GLCTR
IDN  KCTR DCAT          XMUL FSCTR FAP
0099 01   Settore       X    A     C
0099 02   Servizio      X    A     C
0099 03   EttoreCust1   NULL A     C
0099 04   EttoreCust2   X    A     C
0099 06   Eme3Prodcat2x X    A     P
0099 07   Eme4Prod      NULL A     P
0099 08   sss           X    A     P
0099 0C   AAA           NULL A     P[/code]

Here is my SQL query, which is wrong at the `AND ( SELECT COUNT(*) ...` part:

[code="sql"]DECLARE @kpsi AS VarChar(2)
DECLARE @diniv AS VarChar(8)
DECLARE @dfina AS VarChar(8)
DECLARE @fab AS VarChar(1)
DECLARE @idn AS VarChar(4)
SET @kpsi = '02'
SET @diniv = '00000000'
SET @dfina = '99999999'
SET @fab = 'A'
SET @idn = '0099'

SELECT DISTINCT
    GLPSI.IDN GLMSO_GLPSI_IDN, GLPSI.KPSI GLMSO_GLPSI_KPSI, GLPSI.SIG GLMSO_GLPSI_SIG,
    GLPSI.NOM GLMSO_GLPSI_NOM, GLPSI.DESCR GLMSO_GLPSI_DESCR, GLPSI.DNAS GLMSO_GLPSI_DNAS,
    GLPSI.ORD GLMSO_GLPSI_ORD, GLPSI.FAB GLMSO_GLPSI_FAB, GLPSI.NTE GLMSO_GLPSI_NTE,
    GLPSI.COD GLMSO_GLPSI_COD, GLPSI.UPS GLMSO_GLPSI_UPS, GLPSI.UPD GLMSO_GLPSI_UPD,
    GLARA.IDN GLMSO_GLARA_IDN, GLARA.KARA GLMSO_GLARA_KARA, GLARA.KPSI GLMSO_GLARA_KPSI,
    GLARA.SIG GLMSO_GLARA_SIG, GLARA.NOM GLMSO_GLARA_NOM, GLARA.DNAS GLMSO_GLARA_DNAS,
    GLARA.ORD GLMSO_GLARA_ORD, GLARA.FAB GLMSO_GLARA_FAB, GLARA.UPS GLMSO_GLARA_UPS,
    GLARA.UPD GLMSO_GLARA_UPD,
    GLTDM.IDN GLMSO_GLTDM_IDN, GLTDM.KTDM GLMSO_GLTDM_KTDM, GLTDM.SIG GLMSO_GLTDM_SIG,
    GLTDM.NOM GLMSO_GLTDM_NOM, GLTDM.DTDM GLMSO_GLTDM_DTDM, GLTDM.FAB GLMSO_GLTDM_FAB,
    GLTDM.UPS GLMSO_GLTDM_UPS, GLTDM.UPD GLMSO_GLTDM_UPD,
    GLMSO.IDN GLMSO_IDN, GLMSO.KMSO GLMSO_KMSO, GLMSO.KMSOP GLMSO_KMSOP,
    GLMSO.KARA GLMSO_KARA, GLMSO.COD GLMSO_COD, GLMSO.NOM GLMSO_NOM,
    GLMSO.DNAS GLMSO_DNAS, GLMSO.DINIV GLMSO_DINIV, GLMSO.DFINV GLMSO_DFINV,
    GLMSO.DFINA GLMSO_DFINA, GLMSO.FAB GLMSO_FAB, GLMSO.KTDM GLMSO_KTDM,
    GLMSO.UPS GLMSO_UPS, GLMSO.UPD GLMSO_UPD
FROM GLMSO
    INNER JOIN GLARA ON GLARA.IDN = GLMSO.IDN AND GLARA.KARA = GLMSO.KARA
    INNER JOIN GLPSI ON GLPSI.IDN = GLARA.IDN AND GLPSI.KPSI = GLARA.KPSI
    LEFT JOIN GLTDM ON GLTDM.IDN = GLMSO.IDN AND GLTDM.KTDM = GLMSO.KTDM
    INNER JOIN GLCMS ON GLCMS.IDN = GLMSO.IDN AND GLCMS.KMSO = GLMSO.KMSO
    INNER JOIN GLCVA ON GLCMS.IDN = GLCVA.IDN AND GLCMS.KCVA = GLCVA.KCVA
    INNER JOIN GLCTR ON GLCVA.IDN = GLCTR.IDN AND GLCVA.KCTR = GLCTR.KCTR
WHERE GLMSO.IDN = @idn
    AND GLMSO.DINIV >= @diniv
    AND GLMSO.DFINA <= @dfina
    AND GLMSO.FAB = @fab
    AND ( SELECT COUNT(*)
          FROM ( SELECT DISTINCT GL2.KCTR
                 FROM GLCTA GL2
                 WHERE GL2.IDN = GLMSO.IDN
                   AND GL2.KARA = GLMSO.KARA
                   AND GL2.KCTR IN ('06','07') ) qr ) = 2
ORDER BY GLMSO.COD[/code]

The expected output is the row with GLMSO 0003, which has both KCTR values: 06 (Eme3Prodcat2x) and 07 (Eme4Prod). Can someone please help?
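One common way to express the "must match both KCTR codes" requirement (relational division) is a grouped EXISTS instead of the correlated CONT-of-DISTINCT subquery. A hedged sketch of just the predicate, reusing the GLCTA subquery and column names from the question:

```sql
-- Keep only GLMSO rows whose GLCTA entries cover both KCTR '06' and '07'
AND EXISTS (
    SELECT 1
    FROM GLCTA GL2
    WHERE GL2.IDN  = GLMSO.IDN
      AND GL2.KARA = GLMSO.KARA
      AND GL2.KCTR IN ('06', '07')
    HAVING COUNT(DISTINCT GL2.KCTR) = 2
)
```

This replaces the `AND ( SELECT COUNT(*) ... ) = 2` clause in the WHERE above; the rest of the query is unchanged.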

Denormalizing into a grid

Posted: 12 Jul 2013 02:27 AM PDT

I just read the article on using UNPIVOT to normalize data. I noted that it said UNPIVOT doesn't exactly do the reverse of PIVOT, and that's too bad. I guess what I need is DEUNPIVOT. I have several ways to do this for a limited set of cases, but I'm hoping for something that can be generalized. i.e., given this input:

[code]DECLARE @OrderDetail TABLE
    ( orderid INT
    , personname NVARCHAR(32)
    , productdate DATE
    , productname VARCHAR(30) )

-- Load Sample Data
INSERT INTO @OrderDetail VALUES ( 25815983, 'Jasper', '2013-06-20', 'Employee Daycare 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Jasper', '2013-06-21', 'Employee Daycare 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Jasper', '2013-06-25', 'Employee Daycare 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Jasper', '2013-06-26', 'Employee Daycare 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Jasper', '2013-06-27', 'Employee Daycare 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Jasper', '2013-06-28', 'Employee Daycare 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-20', 'Employee PreSchool 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-21', 'Employee PreSchool 2 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-24', 'Employee PreSchool 5 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-25', 'Employee PreSchool 5 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-26', 'Employee PreSchool 5 Dayy' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-27', 'Employee PreSchool 5 Day' )
INSERT INTO @OrderDetail VALUES ( 25815983, 'Milo', '2013-06-28', 'Employee PreSchool 5 Day' )[/code]

I need this output:

[code]          Jasper                  Milo
6/20/2013 Employee Daycare 2 Day  Employee Preschool 2 Day
6/21/2013 Employee Daycare 2 Day  Employee Preschool 2 Day
6/25/2013 Employee Daycare 4 Day  Employee Preschool 5 Day
6/26/2013 Employee Daycare 4 Day  Employee Preschool 5 Day
6/27/2013 Employee Daycare 4 Day  Employee Preschool 5 Day
6/28/2013 Employee Daycare 4 Day  Employee Preschool 5 Day[/code]

Such that there is a row for each distinct date, a column for each distinct name, and the product name at the intersection of each one. Alternatively, they might ask for distinct names on the rows, products in the columns, and dates in the intersections, but I suspect that a solution to one will be extendable to the others.
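For the date-rows/name-columns shape, a static PIVOT covers the case where the names are known in advance; for the general case the `IN (...)` column list has to be assembled dynamically and run through `sp_executesql`. A hedged sketch against the @OrderDetail sample above (using `MAX` on the assumption of one product per person per date; with duplicates it picks one arbitrarily):

```sql
-- Static pivot: one column per known person name
SELECT productdate, [Jasper], [Milo]
FROM ( SELECT personname, productdate, productname
       FROM @OrderDetail ) AS src
PIVOT ( MAX(productname)
        FOR personname IN ([Jasper], [Milo]) ) AS p
ORDER BY productdate;
```

The common generalization on 2008 is to build the `[Jasper], [Milo], ...` list from `SELECT DISTINCT personname` via `STUFF(... FOR XML PATH(''))`, concatenate it into this statement, and execute the result with `sp_executesql`.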

GetDate() 5 years ago without time, formatted 'YYYY-MM-DD'

Posted: 23 Jun 2013 03:40 AM PDT

I'm trying to restrict a query to go back five years. I have the following query, which is not in the format that I need:

[code="plain"]SELECT convert(varchar(11), DateAdd(yy, -5, GetDate()))[/code]

Returns:

[code="plain"]Jun 23 2008[/code]

What I need is the following:

[code="plain"]'2008-06-23'[/code]

Thank you.
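CONVERT takes an optional style code, and style 23 produces the `yyyy-mm-dd` form directly (style 120, ODBC canonical, gives the same date part when truncated to 10 characters). A minimal sketch:

```sql
-- Style 23: ISO yyyy-mm-dd
SELECT CONVERT(varchar(10), DATEADD(yy, -5, GETDATE()), 23);
-- Equivalent via style 120 (yyyy-mm-dd hh:mi:ss) truncated to the date part
SELECT CONVERT(varchar(10), DATEADD(yy, -5, GETDATE()), 120);
```

For the restriction itself, comparing the raw datetime column against `DATEADD(yy, -5, GETDATE())` is usually preferable to comparing formatted strings, since it stays sargable.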

Running Dynamic SQL

Posted: 30 Jun 2013 04:38 AM PDT

Hi all, I have in the past created dynamic SQL in a tally table and looped through it to execute it. This is probably not best practice, but what alternatives are there, apart from cursors and/or WHILE loops, that could be used in this instance? Thanks for your thoughts.

[SQL Server 2008 issues] Where is Left table in Left Join ???



Where is Left table in Left Join ???

Posted: 22 Jun 2013 05:45 PM PDT

Hi all, I did my homework but just want to confirm which is the "left" table in a LEFT JOIN:

a. The first table mentioned in the FROM clause
b. The one on the left side of the equation in the ON clause

Is this the same:

Select T1.C1, T2.C2 From T1 Left join T2 on T1.C1 = T2.C2

vs.

Select T1.C1, T2.C2 From T1 Left join T2 on T2.C2 = T1.C1 ------ note T2 first here

What if I do a left join on tables which are not in the main select? In that case could the order be determined from the "=" line? Tx, Mario
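For reference, the "left" table is simply the one named before the LEFT JOIN keyword; the operand order inside the ON predicate is irrelevant, so both queries in the question are equivalent. A minimal sketch:

```sql
-- T1 is the preserved (left) table in both statements;
-- swapping the ON operands changes nothing
SELECT T1.C1, T2.C2 FROM T1 LEFT JOIN T2 ON T1.C1 = T2.C2;
SELECT T1.C1, T2.C2 FROM T1 LEFT JOIN T2 ON T2.C2 = T1.C1;
```

In both cases every row of T1 is returned, with NULLs in the T2 columns where no match exists.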

Removing linked servers

Posted: 12 Jul 2013 02:13 PM PDT

Hi all, I have some linked servers which connect to Oracle and pull data, and I need to remove some of them. What is the proper way to remove these? Do I need to check anything before I drop or delete them? Please advise. Thanks, SueTons.
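Linked servers are normally dropped with `sp_dropserver`; passing `'droplogins'` removes the associated linked-server login mappings in the same call. A hedged sketch, where `ORA_LINK` is a hypothetical linked-server name; before dropping, it is worth checking for jobs, views, or procedures that still reference the server by its four-part name:

```sql
-- Drop the linked server and its remote-login mappings
EXEC sp_dropserver @server = N'ORA_LINK', @droplogins = 'droplogins';
```

Existing linked servers and their login mappings can be reviewed beforehand in `sys.servers` and `sys.linked_logins`.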

Connection problems?

Posted: 12 Jul 2013 07:30 AM PDT

We are having some connection issues with several items, but I am going to post just one and hope that it puts me on the right path. I am running a query that pulls data from a couple of linked servers. It has always run fine but is now returning this:

OLE DB provider "SQLNCLI10" for linked server "mainserver" returned message "Protocol error in TDS stream".
OLE DB provider "SQLNCLI10" for linked server "mainserver" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "mainserver" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "mainserver" returned message "Communication link failure".
Msg 10054, Level 16, State 1, Line 0
TCP Provider: An existing connection was forcibly closed by the remote host.

I started by saying that there is a network error, but the network people say that the network is fine (of course) and that it is an authentication problem. I don't see how. I am trying to find something solid to take back to the network people. Any help is appreciated.

Transfer The Logins and The Passwords Between Instances of SQL Server

Posted: 30 Jun 2013 11:09 AM PDT

We want to transfer the logins and the passwords between instances of SQL Server 2008. We have 78 logins and 78 database users. What are the clear, step-by-step instructions for moving all the logins and users and fixing orphaned users, in a concise manner? Thanks.
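The commonly documented approach is Microsoft's `sp_help_revlogin` script (KB 918992), which scripts each login out with its password hash and SID so it can be recreated identically on the target instance; keeping the SIDs prevents users from becoming orphaned in restored databases in the first place. Any users that do end up orphaned can be found and remapped; a minimal sketch (`SomeUser`/`SomeLogin` are placeholder names):

```sql
-- In the restored database: list orphaned users
EXEC sp_change_users_login @Action = 'Report';

-- Remap an orphaned user to the recreated login (SQL Server 2005+)
ALTER USER [SomeUser] WITH LOGIN = [SomeLogin];
```

Run the report per database, remap each user it lists, and the 78 logins/users pair up again without resetting passwords.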

handling errors in SSIS 2008

Posted: 12 Jul 2013 07:50 AM PDT

Hi everyone, I'm working with SQL Server Integration Services 2008 and have some questions about handling failures in SSIS. I have a package containing a data flow task, which has some data flow sources, transformations, and destinations. Without error handling, the task stops, displaying the error cause in the output window; the error message is very clear, like this:

Hresult: 0x80004005 Description: "Violation of UNIQUE KEY constraint ...."

When I try to handle the failures for logging purposes, I try to capture the message by manipulating the "OLE DB Destination Error Output", looking for that error, but the message is hidden and it always returns ErrorCode -1071607685 with the message "No status available", and the ColumnNumber is 0. It is the same for any SQL Server error that occurs. I need to capture SQL Server errors (code and message) from the SSIS package, not SSIS package errors, in order to log those messages in a database. Thanks for any suggestion. Jose Obregon

SSIS load?

Posted: 12 Jul 2013 06:42 AM PDT

When setting up a new SQL Server 2008 R2, is there any advantage to installing SSIS on a separate server to offload CPU and memory from the SQL Server? I'm thinking that the SSIS service doesn't do much other than manage the packages and monitor their execution, and that all the processing takes place wherever the ETL SSIS jobs are actually run. But my developers think otherwise. Please advise. Thanks

Sending mail from an on-prem server to off-prem Exchange

Posted: 12 Jul 2013 05:20 AM PDT

I'm having some trouble getting my SQL mail set up using an off-premise Exchange with Office 365. I can get the server to send through Google's SMTP but can't get it to work with Office 365. Has anyone gotten this type of setup to work? I've found through searching that I may have to set up an SMTP server locally to relay to the online Exchange. Below is a link to a description: [url=http://blogs.technet.com/b/meamcs/archive/2013/02/25/how-to-configure-sql-database-mail-so-send-emails-using-office-365-exchange-online-a-walkthrough.aspx]How to configure SQL Database Mail to send emails using Office 365 Exchange Online - a walkthrough[/url]

SSIS to the rescue, Replication, or overkill?

Posted: 12 Jul 2013 12:38 AM PDT

Question - I need to pull data from our internal SQL Server database, create a few derived tables which are truncated and repopulated daily, and get this data out to our public web site in a set of identical tables. The tables on the web site match the derived tables on our internal servers identically in design. So the daily flow is this:

(1) Multiple internal tables >> views & stored procs >> (2) truncate and populate derived tables >> (3) get data out to the web site into matching tables.

The method I'm using to populate the web site tables is to delete all the records in these dummy tables and import the data through the SSMS wizards. So far, so good. This is a beta web site and it is only day 2 of this process, so it's still all new. I can easily do this through SSMS because I can [b]connect[/b] to both our internal database and the one residing on the web hosting servers.

So my question is this - although I haven't tried yet, wouldn't it make sense to just eliminate the derived tables I am populating daily in step 2 and get that data out to the web site tables directly? As long as SSMS is connected to both SQL Server databases, this should be possible through a stored procedure, right, since I can do it manually through the Import/Export wizards? If this is true then I suppose it's just a matter of syntax in the stored proc, right? Thoughts? Advice?

SSIS in cluster

Posted: 12 Jul 2013 03:53 AM PDT

I have two nodes, nodeA and nodeB, in a SQL Server failover clustered environment. Each of these nodes has two SQL Server databases that I am interested in. I need to deploy SSIS packages to this cluster environment using XML configuration and file-system deployment. Now, in the config file I am not quite sure which node names (server names) I should use, because the node names will switch when a failure occurs. Please advise.

System databases backup retention policy

Posted: 12 Jul 2013 12:40 AM PDT

Hi. My question is rather a general one, and I appreciate there will be a difference in the answers depending on people's environments. My question is: "How long should you keep your system database backups if you are backing them up once a week?" On our servers we are keeping them for a 5-month period. I think this is too much, as if there was a failure of the master database surely we would just use the most recent backup. I'm thinking of changing the retention to just 4 weeks and saving some disc space!

completely uninstall SQL 2008

Posted: 12 Jul 2013 01:51 AM PDT

I am trying to completely uninstall SQL Server 2008 from my computer. I removed all the entries from Control Panel and removed the directories, but it is still there.

Impact of DBCC UPDATEUSAGE (0)

Posted: 03 Jul 2013 08:17 PM PDT

We got the advice to run DBCC UPDATEUSAGE on one of our databases. What impact can be expected from this command? [b]Can this be done online?[/b] Does it impact the server's performance? This is for a 2005 system; database size is 40 GB. On a 2008 system a comparable database of 10 GB was done in 10 seconds. On a 2008 R2 system (small) a 200 GB database was done in just under 6 minutes. A test for the 2005 system on a backup of the database is in preparation. Thanks for your time and attention, Ben Brugman
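For reference, the command takes a database name or ID (0 means the current database) and corrects the page and row counts stored in the catalog; it runs online, though on large databases it can generate noticeable I/O while it scans allocation information. A minimal sketch:

```sql
-- Correct page/row counts for the current database, suppressing informational messages
DBCC UPDATEUSAGE (0) WITH NO_INFOMSGS;

-- Or target a specific database and table (hypothetical names)
DBCC UPDATEUSAGE (N'MyDb', N'dbo.MyTable');
```

On 2005+ the counts are far less prone to drift than on 2000, which is why runs like the 10-second example above often find little to fix.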

Fail Over Clustering Question

Posted: 11 Jul 2013 11:39 PM PDT

I will build up to the question... If we have a two-node active/active failover cluster set up like the following:

Node 1:
Server Name - CLSQL01
SQL VCO - SQLBox1
Instance Name - SQLBox01
Database - DB1

Node 2:
Server Name - CLSQL02
SQL VCO - SQLBox2
Instance Name - SQLBox02
Database - DB2

Our connection strings in the web.configs would look like this:

Provider=SQLOLEDB;Server=SQLBox1\SQLBox01;Database=DB1;Uid=User1;Pwd=Password123;
AND
Provider=SQLOLEDB;Server=SQLBox2\SQLBox02;Database=DB2;Uid=User2;Pwd=Password321;

In the situation of a failure, it is my understanding that the whole instance will fail over to the other node, be it 1 database or 10 databases. Using the information above, if SQLBox2\SQLBox02 has a failure and that instance fails over to Node 1, that should change the second connection string to the following:

Provider=SQLOLEDB;Server=SQLBox1\SQLBox02;Database=DB2;Uid=User2;Pwd=Password321;

If this were to happen, it is my understanding that all of this is automatic, but how does the web.config know that there was a failure and that it needs to change its connection string? My understanding has been that there will be a "cluster name" (for lack of a better term) in the connection string that knows of both SQL instances, and in case of a failure the web.config does not care what node it is running on. In the above situation, how does the application/web.config know to change the server portion of the connection string? Thanks in advance for your help!

Help in BCP load failure.

Posted: 12 Jul 2013 12:38 AM PDT

We are loading a flat file (pipe delimited). Until today, if the file had any issues, the load failed with an error and NO record(s) were loaded to the processing table. But today we had a BCP failure (due to a right-truncation error; the file has 26 records) and just one record loaded, with the errors logged in the log file. Just wondering how one record loaded when the BCP file load failed. My understanding is that the BCP command will not load a single record if the file has any issues (format, or length of columns greater than the specified column size in the destination table). Can someone help here?

SQL Server Monitoring Tools: ManageEngine Free SQL Health Monitor 1.0

Posted: 11 Jul 2013 11:58 PM PDT

Has anyone used ManageEngine's free monitoring tool for SQL Server, Free SQL Health Monitor 1.0? If you have worked with it, could you please share your opinion of this software? Thank you. Mark G.

SQL server windows authentication through network

Posted: 11 Jul 2013 11:22 PM PDT

I am using SQL Server 2008 R2. There are about 50 PCs in my office connected in a network. On a SQL Server 2008 R2 Express instance (installed on Windows 8) I created, for every user, a Windows account with the same name as on their own PC, and then created Windows-authentication logins in SQL. Now all the users are able to connect to SQL Server using Windows authentication. I am trying to do the same for another SQL Server 2008 R2 Express instance, which is installed on Windows XP SP3, but it is not working: when I try to connect to SQL Server using Windows authentication from a networked PC, a message comes up like "Login failed for user 'PC91\Guest'." It is not recognizing the Windows account of my PC. Why does this happen? Please tell me a solution.

SQL Agent job fails for powershell script

Posted: 11 Jul 2013 08:09 PM PDT

Hi everyone. I have a PowerShell script that outputs to a .xlsx file. It works fine in PowerShell, no issues. However, when I try to schedule it as a SQL Agent job it fails. The agent account runs with permissions to the directory where the .xlsx is to be saved, and the script does shut down Excel when completed. This is the error:

The error information returned by PowerShell is: 'Exception calling "SaveAs" with "1" argument(s): "SaveAs method of Workbook class failed"'

I've had a look around but can't seem to find a solution. Any advice gratefully received. Cheers

SQL Server trigger on non-updateable, non-indexed views

Posted: 11 Jul 2013 10:39 PM PDT

Hi, so here is my problem. I have two databases that are physically separated - one is in the US and one in Europe. In the US database I have 3 tables that, via a query, produce data for 1 table in the Europe database. The Europe database table is not updateable. I would like that, when one of the fields in those 3 US tables gets updated, inserted, or deleted, the change is automatically reflected in the Europe database. I have been informed that I cannot use replication.

I can create a view in the European database which shows me exactly the content of the US tables in the exact way that I want it to appear in the table; however, I cannot get the program to use the view because it would be very slow to search. Hence I came to the conclusion that I need a trigger that updates the table whenever the content in the view changes. I cannot use indexed views, as this is not in the same database system. I also tried using an INSTEAD OF trigger, but it is not firing. Can someone let me know what to try next? Thanks, N

Friday, July 12, 2013

[SQL Server] Total # of Schema



Total # of Schema

Posted: 12 Jul 2013 02:03 AM PDT

Hi, how can I calculate the total number of schemas present in a database?
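The schemas in the current database are listed in the `sys.schemas` catalog view, so a count is a one-liner; note it includes the built-in schemas (dbo, sys, INFORMATION_SCHEMA, and the fixed-role schemas), which can be filtered out if only user-created schemas are wanted. A minimal sketch:

```sql
-- Total number of schemas in the current database
SELECT COUNT(*) AS schema_count FROM sys.schemas;
```

To repeat the count across every database on the instance, the same query can be run per database via `sp_MSforeachdb` or a cursor over `sys.databases`.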

Search This Blog