Monday, June 10, 2013

[SQL Server Data Warehousing] SSAS Cube not taking the data


SSAS Cube not taking the data



Please let us know if


1. You are getting any errors


2. How frequently is the data in your view updated, and for how long have you been using the cube?


3. Have you scheduled the cube to be processed regularly?




social.technet.microsoft.com/Forums

[SQL Server Data Warehousing] Collecting data from multiple sites


Collecting data from multiple sites



Your question has two different parts:


1- Archiving existing databases


2- Integrating, cleansing, de-duplicating (maybe), data warehousing, ETL, cubes


For the first part, there are many articles published on archiving SQL Server databases. For example:


http://www.mssqltips.com/sqlservertip/1121/archiving-data-in-sql-server/


I am going to answer the second part of your question, which is more related to data warehousing and BI.


First of all, you need to integrate data from those several hundred locations into an integrated database (this can be called an integration or staging area). SSIS is a good data transfer tool which helps a lot in moving data from those sources into the integrated database. You can use different methods for transferring data in SSIS: FTP, exported CSV files, direct connections to the SQL Server databases (if a live connection exists), web services, and so on.


You will also need to design the data warehouse. Kimball is one of the most common methodologies for designing a data warehouse based on business requirements; you can use Kimball's book The Data Warehouse Toolkit to learn about dimensional modeling.


After designing the data warehouse you will require an ETL (Extract, Transform, Load) tool to get data from the integrated database (or staging) into the data warehouse. SSIS, again, is a very powerful ETL tool; it provides a large set of data transformation components for use in the ETL process.
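
Much of that staging-to-warehouse movement can also be expressed directly in T-SQL and invoked from an SSIS Execute SQL Task. A minimal sketch, assuming hypothetical staging.Customer and dbo.DimCustomer tables (names not from the original question):

  MERGE dbo.DimCustomer AS target
  USING staging.Customer AS source
      ON target.CustomerKey = source.CustomerKey
  WHEN MATCHED THEN
      UPDATE SET target.Name = source.Name,
                 target.City = source.City
  WHEN NOT MATCHED BY TARGET THEN
      INSERT (CustomerKey, Name, City)
      VALUES (source.CustomerKey, source.Name, source.City);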


For data cleansing, I recommend using Data Quality Services (DQS), which is part of the SQL Server 2012 services. With this service you can create knowledge bases for different business domains, and somebody from the business side plays the role of Data Steward to maintain each knowledge base. In SSIS there is a DQS Cleansing component which talks to DQS and applies the data quality rules of the knowledge bases to the incoming data stream in the ETL process; as a result you will have cleaner data.


For data consistency and governance, the best way to keep data consistent is a master data management (MDM) tool; Microsoft's tool for MDM is Master Data Services (MDS). This tool is part of the SQL Server installation as well. You create data models with entities, relationships between entities, and business rules. MDS provides ONE SINGLE VERSION OF TRUTH for your data structure; this means you can have only a single version of customer information, or a single version of product info, which helps data consistency.


Finally, for working with OLAP cubes, you can use Analysis Services (SSAS) to build cubes on your data warehouse.


And if you want to create reports and dashboards from your cube, there are many data visualization tools in the Microsoft stack, such as SSRS, PerformancePoint, Power View, Excel, and Excel Services.



http://www.rad.pasfu.com




social.technet.microsoft.com/Forums

[SQL Server Data Warehousing] Change Data Capture - Pros/Cons/Gotchas


Change Data Capture - Pros/Cons/Gotchas



My company is thinking of implementing Change Data Capture. I can find many online articles that cover the basics of Change Data Capture, but no real-life examples of using it: the pros and cons, limitations, or "gotchas" encountered during actual implementation and usage, etc. I could use such info as we determine whether Change Data Capture is the right solution for our needs.


Some details about our scenario:


1. Source data: We currently have about 200 entities that use one of our applications. The entities involved are all functioning units of a larger, single business entity, but are spread out geographically. Each entity and its copy of the application stores its data in a separate database, although all copies of the database have identical data structures and tables. They post summary data on a routine basis to a couple of main servers; however, in each case this data is still stored in a separate database on the servers - thus, still about 200 databases.
2. What we would like to do: Bring together this summary data into a single relational data store, and from there into a dimensional database. We do not want to, nor can we, eliminate the source databases, on either the geographically distributed sites or the main servers, because we don't "own" them.


Is this a feasible scenario for Change Data Capture? We would need to include data column(s) to indicate data sources because the data, although placed in a single database, must still be identifiable by data source, and data from one source must never overwrite data from a different source. As I understand it, Change Data Capture would need to be enabled on the 200+ databases on the main servers, from which we would then likely use SSIS to process the data and post it to the new database we would develop.
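
For reference, CDC is enabled per database and then per table, which is exactly why the 200+ databases would each need it switched on. A minimal sketch using the documented system procedures (database, schema, and table names here are placeholders):

  USE SourceDb;
  EXEC sys.sp_cdc_enable_db;

  EXEC sys.sp_cdc_enable_table
      @source_schema = N'dbo',
      @source_name   = N'SummaryData',  -- placeholder table name
      @role_name     = NULL;            -- no gating role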


I would appreciate any advice, based on understanding and experience, about important things to consider, things to avoid, "gotchas", best practices, etc. We have worked quite a bit with SSIS, developing dimensional data structures, and so on, so although any major comments there are welcome, our primary "need to know" is how/if Change Data Capture can be used for our scenario, as we have no experience with it.

Thanks for your assistance,
Randy




social.technet.microsoft.com/Forums

[SQL Server] Delay in Print



Delay in Print

Posted: 09 Jun 2013 03:32 PM PDT

Dear All,

I have procedures A, B, and C. Procedure A calls B and C. Procedures B and C have loops, with PRINT statements in the loops (for me to understand what's happening). These procedures also have SELECT statements before the loops (since it's under testing, all these SELECTs and PRINTs). But I have noticed that although the prints of B are not completely displayed, it runs procedure C and displays the SELECT result of C, and only after some time does it display the remaining prints of B. I am not able to understand this behaviour. As per my knowledge, everything in B should finish and only then should it start with C. Can anybody please explain the reason for this behaviour?

Regards
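
The behaviour described matches SQL Server's buffering of PRINT output, which is flushed to the client in batches rather than as each statement completes. A common hedged workaround sketch uses the documented WITH NOWAIT option to force an immediate flush:

  -- Severity 0 is informational; WITH NOWAIT pushes the message to the client now
  RAISERROR ('Finished loop iteration %d', 0, 1, @i) WITH NOWAIT;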

SSRS / IsNothing problems

Posted: 10 Jun 2013 08:57 AM PDT

I'm trying to write an expression in SSRS to allow for different fields to display based on the result set. This example is for the title line, and obviously it would be adjusted for the row containing the data.

[u]Objective:[/u]
if SectionCategory is null and SectionType is null, then nothing;
if SectionCategory is null and SectionType is not null, then "Type";
if SectionCategory is not null, then "Category"

[u]Expression:[/u]

=switch(
    Fields!SectionCategory.Value is nothing and Fields!SectionType.Value is nothing, nothing,
    iif(Fields!SectionCategory.Value is nothing and (IsNothing(Fields!SectionType.Value)="No"), "Type", nothing),
    iif((IsNothing(Fields!SectionCategory.Value)="No"), "Category", nothing)
)

Is it possible to nest IIF statements in a Switch statement? I've experimented several different ways and I'm getting "#Error" in Preview.

SQL Connection

Posted: 02 May 2013 05:40 AM PDT

I just had SQL Server 2012 installed on my local machine, but after typing my code I clicked on the execute button to test it and found I am not connected to any DB server. I'm sure this is a dumb question, but I'm a noob... don't I have to be connected to a DB in order to test what I create, or to build a new DB?

Choosing the correct driver

Posted: 10 Jun 2013 12:09 AM PDT

I'm writing some documentation on how to connect Access to SQL Server via ODBC. I honestly always just choose "11.0" when it comes to drivers, just because. But I'm wondering: what determines the driver I "should" be using? Does the version of SQL Server determine this? And I'm assuming the user's computer won't have all the drivers I'm seeing here, so can someone clarify the rules for choosing the correct driver?

[img]http://i.stack.imgur.com/pwpjd.png[/img]

*I also asked this on StackOverflow, so if you're a user of that as well and come up with an answer, feel free to post it there too so I can give you some points :)

NOCHECK CONSTRAINT

Posted: 09 Jun 2013 03:05 PM PDT

Dear All,

There is hardly any data in the table on which I have defined a foreign key, but when I set the constraint with NOCHECK it takes a long time / never finishes. What may be the reason?

Regards
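
For context, a hedged sketch of the two documented forms (table and constraint names are placeholders). Disabling is metadata-only; it is re-enabling WITH CHECK that must scan the existing rows, which is usually where the time goes:

  -- Disable checking: metadata-only, normally fast
  ALTER TABLE dbo.Child NOCHECK CONSTRAINT FK_Child_Parent;

  -- Re-enable and validate existing rows: scans the table
  ALTER TABLE dbo.Child WITH CHECK CHECK CONSTRAINT FK_Child_Parent;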

Inserted & deleted table

Posted: 09 Jun 2013 10:04 PM PDT

Hi,

I am confused by the query below. Could anyone please help me? A client has sent me a view query in which he put one condition:

[b]create and replace view vu_name(column1, column2,------) from table name where isnull(table name.deleted).[/b]

I want to know what that condition "where isnull(table name.deleted)" means.

Thanks in advance

Linked Servers (SQL 2005)

Posted: 09 Jun 2013 10:57 PM PDT

So I am a newb... I have a database system that uses SQL 2005. The server these systems reside on was migrated to an Active Directory domain. Most of that process involved changing the name of the server from what the vendor named it to what my company wants it called. After migrating, the system functioned well; however, there is one persistent problem that the vendor and I cannot fix. Because of our company's security policy the vendor no longer has remote access, so I am doing most of the local support. However, SQL is not my subject of expertise.

My database is getting an error that tells me: {Microsoft}{ODBC SQL Server Driver}{SQL Server} Could not find server 'xxxxxxxx' in sys.servers. It then recommends running the sp_addlinkedserver procedure to add the server. Since this was running fine prior to the migration to AD, I believe the server just needs to be renamed. So I ran the stored procedure sp_linkedservers to see what server is linked, and of course it comes up with an old server name. How can I rename this server?
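
This particular error after a machine rename is usually about the server's own entry in sys.servers rather than a true linked server; the documented fix is to drop the stale name and re-add the new one as the local server. A hedged sketch (the names are placeholders):

  SELECT @@SERVERNAME;  -- shows the stale name

  EXEC sp_dropserver 'OldServerName';
  EXEC sp_addserver 'NewServerName', 'local';
  -- Restart the SQL Server service so @@SERVERNAME picks up the change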

[how to] SQL-server 2008 TMG access logs required?



SQL-server 2008 TMG access logs required?

Posted: 10 Jun 2013 08:14 PM PDT

I'm using this guide http://msdn.microsoft.com/en-us/library/cc280424%28v=sql.105%29.aspx

Now, after enabling this, I'm using a JDBC connection from another machine (a SIEM) to pull the logs. This is working fine, but I'm only able to get logs such as alert *insert* etc.

What do I need to do to bring in the entire access logs? Do I need to change the object schema?

PostgreSQL "freeze"/"unfreeze" command equivalents

Posted: 10 Jun 2013 07:05 PM PDT

In Derby (an embedded database written in Java which is mostly used for testing or prototyping) there are "freeze" and "unfreeze" commands which can be used during an online backup. "Freeze" simply causes all database accesses to block until "unfreeze" is called. This is useful for backing up using an external program, which you might do if the external program is much faster than using Derby's internal backup solution. For my use case, I can take a snapshot almost instantaneously using some built-in filesystem utilities, so it is a constant-time operation (not O(length of DB files)).

I'm migrating an application which has outgrown Derby to PostgreSQL, and I was wondering if there's anything comparable there which I can use to quiesce all connections. Also, I'd prefer to know what my serialization point is from inside my application so that I don't get caught in some awkward state, so being able to pause/resume all other accesses is really nice-to-have for me.

Since PostgreSQL has a transaction log, I could just take a snapshot without "freeze"ing, but the snapshot would need to be run through PostgreSQL's recovery mechanism before I can use it because otherwise what's stored on disk would be the same as if I pulled the plug on a normal filesystem. This solution is not ideal.

EDIT I learned that pg_start_backup() is close, but it doesn't cause incoming transactions to block until a matching call to pg_stop_backup(), forcing me to do a point-in-time recovery back to the transaction id pg_start_backup() returns from a filesystem snapshot. It would be nice not to have to actually shut down PostgreSQL to get this (perhaps there is a pseudo-shutdown command that keeps connections open?).
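
For readers following along, pg_start_backup() and pg_stop_backup() are real built-in functions; a hedged sketch of bracketing an external snapshot with them (as noted above, the snapshot still requires WAL replay on restore, and concurrent writes are not blocked):

  SELECT pg_start_backup('snapshot-2013-06-10');  -- mark the backup start point
  -- ... take the filesystem snapshot externally ...
  SELECT pg_stop_backup();                        -- close the backup bracket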

Which hardware do you suggest?

Posted: 10 Jun 2013 03:28 PM PDT

I'm completely new to buying database hosting and I would greatly appreciate your opinion on the following situation. I just created an app and I want to go live this weekend. I want to be as well prepared as possible, so here are the database usage statistics I expect to see every 24 hours:

- 20,000,000 inserts into a table which contains 1 bigint, 7 ints and 1 date
-  5,000,000 updates on the above table
- 10,000,000 inserts into a mapping table with 1 bigint, 1 int
-  2,000,000 inserts into a table with 3 ints
-  5,000,000 select queries with a WHERE on the pk of the first table described above

I will use MySQL 5.5 with all InnoDB tables. At the moment I am thinking about buying a dedicated server with these specs:

Intel Xeon E3-1220LV2 2.3GHz
100 Mbps Uplink
4 GB DDR3 Memory
500 GB RAID-1 Drives
10 TB Bandwidth
Up to 5 Dedicated IPs

For a price of $140 a month. What do you think: will this be enough? Would it be overkill? Do you know a better host? At the moment I am on a shared web host ($12 a month) and I get a fifth of the numbers above, but the shared database is really not optimized for InnoDB.

How do I clear log copy queue red gate sql backup? [closed]

Posted: 10 Jun 2013 01:46 PM PDT

I was wondering if anyone knew the steps to take to clear the Log Copy Queue in Red Gate's SQL Monitor software?

pgAdmin3 can't read log

Posted: 10 Jun 2013 06:04 PM PDT

I have pgAdmin3 and PG running on my iMac. If I point to the server and run Tools/Server Status -

I get:

ERROR: could not read directory pg_log: No such file or directory  

The PG log is located in:

macintosh HD/usr/local/var/postgres/server.log  

How do I change pgAdmin3 to read the log?

SQL Server cube - count distinct of dimension key

Posted: 10 Jun 2013 05:35 PM PDT

I have a fact table which has a lot of records with the same person ID. I would like to create a calculated member that counts them, but without duplicates of the ID.

I tried to use just count(distinct([Dimension Person].[PersonID].members)) and some expressions with crossjoin that I found on the Internet, but everything gives wrong record counts.

How to count it correctly?

What collation to use to store characters from the Character Map?

Posted: 10 Jun 2013 03:47 PM PDT

I have a SQL Server 2012 database with the default collation of SQL_Latin1_General_CP1_CI_AS. I've noticed that certain characters from the Windows Character Map application, namely small letters with circumflex (not all, but some), get replaced with their plain Latin equivalent without the diacritic mark. That's not OK for me. I'm using the nvarchar data type, but this does not help.

Basically I need the database to focus mainly on English-language data and apply sorting rules based on that language. But I also need the DB to store all other characters without damaging them. It's OK if sorting is slightly off, but I must retain the exact characters and be able to perform precise searches with respect to these diacritic marks.

What collation should I use instead?

P.S. Just in case someone implies I haven't done my homework - I've been googling to no avail. Information on collations seems to be either absent or very superficial. I'm not even sure where to start. I've read some MSDN articles too, but they were less than helpful. I hope somebody can point me in the right direction.

UPDATE: The problem is narrowed down to a UDF which processes the data. Something goes wrong in it.

CREATE FUNCTION [dbo].[MyFunc]
(
    @Input NVARCHAR (MAX)
)
RETURNS NVARCHAR (MAX)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Output NVARCHAR (MAX)

    SET @Output = @Input

    -- replaces with accented letters, this site cannot correctly display them
    SET @Output = REPLACE (@Output, 'something', 'a')
    SET @Output = REPLACE (@Output, 'somethingelse', 'c')
    -- And so on repeated

    RETURN @Output
END

P.S. WITH SCHEMABINDING is required for a computed persisted column.

P.S.2. And I've found out that the REPLACE function doesn't work correctly, even when run manually in SSMS. Does anybody know what is wrong with it?

P.S.3. EPILOGUE.

I've found out what was wrong.

I missed the N'' in the call to REPLACE function. Hiding in plain sight, as they say...

SET @Output = REPLACE (@Output, 'something', N'a')

Special thanks to Martin Smith for reminding me of the obvious.
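
To illustrate the failure mode (a minimal sketch; the behavior assumes a database collation using code page 1252, such as SQL_Latin1_General_CP1_CI_AS): a literal without the N prefix is varchar, so a character outside the code page is best-fit mapped to its plain Latin equivalent before REPLACE ever sees it.

  SELECT 'ŝ';   -- returns 's': ŝ is outside code page 1252, so it is best-fit mapped
  SELECT N'ŝ';  -- returns 'ŝ': the N prefix keeps the literal Unicode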

This certain MySQL database organisation

Posted: 10 Jun 2013 08:19 PM PDT

Hypothetically, let's say that I have three tables:

- Products (id, product_name, added_timestamp)
- Individuals (id, name, last_name)
- Companies (id, company_name, company_number, company_address)

Now, let's say that both individuals and companies can add their products to my virtual shop.

Now, let's say I want to display all the added products sorted in descending order by the Products.added_timestamp column. However, I don't know how to connect Products with Individuals and Companies and display all the data.

How would I approach this problem?
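
One common shape, sketched under the assumption that Products carries two nullable foreign keys, individual_id and company_id (hypothetical columns, not in the post), is a pair of LEFT JOINs:

  SELECT p.id, p.product_name, p.added_timestamp,
         COALESCE(i.name, c.company_name) AS seller
  FROM Products p
  LEFT JOIN Individuals i ON i.id = p.individual_id
  LEFT JOIN Companies   c ON c.id = p.company_id
  ORDER BY p.added_timestamp DESC;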

find sold subscriptions from table 1 and store name in table 2, joined by store id. (oracle)

Posted: 10 Jun 2013 12:39 PM PDT

I am trying to get the number of sold subscriptions from one table and the store name from another. The tables have a common column containing a store id, however the columns do not have the same name.

Table1                  |   Table2
Amount     store_code   |   store_name    store_no
1          nf1ii        |   Jims kiosk    nf1ii
1          nfoi1        |   Henrys kiosk  nfoi1
...        ...          |   ...           ...

I am trying to count the number of subscriptions sold for each store name. How is this done in Oracle? I am fairly new to SQL in general, so please forgive me if the question is phrased strangely.

I tried something like this; however, it did not return what I expected (I'm not at work at the moment, so I wrote this from memory, i.e. it might be terribly wrong).

select count(*)
from Table1 a, Table2 b
where a.store_code = b.store_no
and a.subs = 'twin' /* type of subscription */
and a.date > '01.05.13'
group a.store_code;

Does anyone know a simple way to solve the example above? I am grateful for any hint, answer or constructive comment.
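
A hedged sketch of the grouped count, using the column names from the post (note that GROUP BY needs the BY keyword, and DATE is a reserved word in Oracle, so the real column likely needs quoting or another name):

  SELECT b.store_name, COUNT(*) AS subscriptions_sold
  FROM table1 a
  JOIN table2 b ON a.store_code = b.store_no
  WHERE a.subs = 'twin'
    AND a."DATE" > DATE '2013-05-01'
  GROUP BY b.store_name;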

Store large hierarchy in NoSQL each day and compare

Posted: 10 Jun 2013 11:36 AM PDT

I'm in the process of creating a utility that stores a fairly large hierarchy into a NoSQL document. The data of this is similar to an Active Directory / LDAP structure which contains sub groupings, etc for each record.

These are fairly large, a JSON representation is about 40MB per copy.

Each day a new copy of this hierarchy would be added as a new document in the collection.

Through a web interface, a user would be able to select any two copies (documents) and they would be compared. The differences between the two would be displayed, sort of like a diff command would display differences between two documents.

My questions are

  • Is NoSQL a good fit for storing a LDAP like structure for just viewing purposes?
  • If so, what sort of techniques are useful for comparing two documents and getting the differences?

I'm currently trying this in MongoDB and I haven't figured out how to do a direct comparison with a MongoDB command to get the differences. So my attempt so far is to simply export each record as a JSON text file and run a custom diff command to compare and format as needed.

I am curious if someone has a better method for this.

An example document in NoSQL

{
    date: "2013-06-10",
    users: [
        {
            _id: 1,
            dn: "CN=B A Baracus,OU=Testing,OU=People,DC=my,DC=domain,DC=com",
            name: { first: 'B', middle: 'A', last: 'Baracus' },
            employeeId: '001234',
            domain: 'mydomain',
            username: 'babaracus',
            created: new Date('Dec 03, 1924'),
            modified: new Date('Mar 17, 2007'),
            memberOf: [
                "CN=TEST-DeleteMe,OU=Testing,OU=People,DC=my,DC=domain,DC=com",
                "CN=CTX - Epic Prod,OU=Access,OU=Groups,DC=my,DC=domain,DC=com",
                "CN=Public Folder Admins,CN=Users,DC=my,DC=domain,DC=com",
                "CN=Exchange Services,CN=Users,DC=my,DC=domain,DC=com",
                ...etc...
            ],
            ...etc...
        }
        ...etc...
    ]
}

How to create concurrent databases, so that when one database is down the other can work?

Posted: 10 Jun 2013 12:21 PM PDT

I have a local application running on a LAN. Sometimes the system hosting the database gets shut down. How can I keep the application working in such a scenario? Can I have two concurrent databases? I am using a PostgreSQL database.

Mirrored tables on PostgreSQL

Posted: 10 Jun 2013 11:20 AM PDT

I have some values under "account" and I need the same values under "invoice", so I'm trying to find a way to symlink my tables, so that every time one gets updated, the other has the new value too. I'm developing under OpenERP; I can't put that update directly in code, so I need to write the rule using pgAdmin or something similar.

I need to symlink 2 tables under Postgres.

ACCOUNT                     INVOICE
|ID|payment_type|           |ID|account_payment_type|
|#1|cc          |           |#1|cc                  |
|#2|cash        |           |#2|cash                |
|#3|cash        |           |#3|cash                |
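
One way to keep the columns in step without touching OpenERP code is an ordinary trigger, which can be created from pgAdmin. A minimal sketch, assuming the table and column names shown above and that the two tables share id values:

  CREATE OR REPLACE FUNCTION sync_payment_type() RETURNS trigger AS $$
  BEGIN
      UPDATE invoice
      SET account_payment_type = NEW.payment_type
      WHERE id = NEW.id;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER account_payment_type_sync
  AFTER INSERT OR UPDATE ON account
  FOR EACH ROW EXECUTE PROCEDURE sync_payment_type();

Mirroring the other direction would need a second trigger, with some guard against the two triggers firing each other recursively.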

Find the salary difference between two months

Posted: 10 Jun 2013 02:01 PM PDT

Suppose we have 3 columns in a table: emp_no, salary, month.

emp_no    salary    month
123       1000      jan
123       2000      feb
123       3000      march

So, output should look like:

emp_no    salary    month     diff
123       1000      jan       1000
123       2000      feb       1000
123       3000      march
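
One hedged way to produce the diff column, assuming a numeric ordering column such as month_no (hypothetical, since the text month names don't sort chronologically) and a database with window functions (SQL Server 2012+, Oracle, PostgreSQL 8.4+):

  SELECT emp_no, salary, month,
         LEAD(salary) OVER (PARTITION BY emp_no ORDER BY month_no) - salary AS diff
  FROM emp_salary;  -- hypothetical table name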

Is there a limit to how many credentials SSMS can remember?

Posted: 10 Jun 2013 10:08 AM PDT

Using SQL Server Management Studio that came with SQL Server 2008 R2, I make connections to the same server using a number of different sets of SQL Server Authentication credentials. On my development machine, I use the Remember password checkbox extensively to avoid having to look up the password each time.

When I choose a Login which I have previously connected with (using Remember password), the Password entry field gets populated with asterisks, and clicking Connect allows me in - most of the time. But sometimes, with no pattern that I can discern, on choosing a previously-used Login, the Password field will prepopulate with asterisks, but clicking Connect tells me the login fails. I enter the password, click Connect, and get in, and then typically the next time I use the same login it will work.

My suspicion is that there is a maximum number of credentials that SSMS is prepared to remember, and only the most-recently-used n credentials are actually remembered, but I can't find any mention of such behaviour in the documentation.

Is there a known limit on how many credentials SSMS will remember?

SQL Server giving intermittant connection failures

Posted: 10 Jun 2013 03:02 PM PDT

I have a SQL Server 2000 running on Windows 2003 Server that is giving me intermittent connection failures.

User connections are getting closed, but only established connections that have been waiting. Meaning: when users are connected to an application that runs against this server, the programs start up fine. The problem usually shows up after the connection has been sitting idle for a bit. I'm wondering if there is some odd setting that is killing older, inactive connections?

What could be causing this? What can I check to troubleshoot this further?

delete rows in 3 tables with on delete cascade

Posted: 10 Jun 2013 12:19 PM PDT

For a 0..n relation between two tables (t1, t2), the physical data model generates an intermediate table (t3) with two FKs.
Given two constraints on the t3 table, each one with "on delete cascade", if I do:

delete from t1 where id1=$id1;  

I want to delete all rows of t3 which contain the FK id1, and all rows of t2 which contain the FK id2.
Is it possible in one query?
Edit: here are my tables:

create T1 (
  id1 ...
  ...
  CONSTRAINT pk_id1 PRIMARY KEY (id1) );

create T2 (
  id2 ...
  ...
  CONSTRAINT pk_id2 PRIMARY KEY (id2) );

create T3 (
  id1 ...,
  id2 ...,
  CONSTRAINT pk_T3 PRIMARY KEY (id1, id2),
  CONSTRAINT fk_T3_1 FOREIGN KEY (id2)
    REFERENCES T2 (id2) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT fk_T3_2 FOREIGN KEY (id1)
    REFERENCES T1 (id1) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE CASCADE )
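
Worth noting: ON DELETE CASCADE only flows from parent to child, so deleting from t1 removes the matching t3 rows but never touches t2. A hedged two-statement sketch, assuming the affected t2 rows are not referenced by any other t1 row:

  -- Remove the T2 rows linked to $id1 (this cascades to their T3 rows too)
  DELETE FROM t2
  WHERE id2 IN (SELECT id2 FROM t3 WHERE id1 = $id1);

  -- Then remove the T1 row; the cascade cleans up any remaining T3 rows
  DELETE FROM t1 WHERE id1 = $id1;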

InnoDB tables don't show after reboot

Posted: 10 Jun 2013 10:58 AM PDT

Each time after a reboot I can't see my InnoDB tables. However, when I use the command line I can see my tables, but I cannot access them.

mysql> show tables;
+----------------------+
| Tables_in_xpl_ticket |
+----------------------+
| active_list          |
| bill_block           |
| bill_block_list      |
| block                |
| block_list           |
| box                  |
| cisco_switch         |
| mac_add              |
| mac_change           |
| month                |
| new_connect          |
| new_user             |
| open                 |
| package_change       |
| paid_list            |
| pay                  |
| problem              |
| re_open              |
| refund               |
| ticket               |
| user                 |
| user_log             |
+----------------------+
22 rows in set (0.00 sec)

But when I want to access a table, it says the table doesn't exist:

mysql> select * from active_list;
ERROR 1146 (42S02): Table 'xpl_ticket.active_list' doesn't exist

Edit by RolandoMySQLDBA

Please run the following in MySQL:

SELECT VERSION();
SELECT CONCAT('[',table_name,']') tbl_name
FROM information_schema.tables WHERE table_schema='xpl_ticket';

Please run the following in the OS:

cd /var/lib/mysql/xpl_ticket
ls -l

disk I/O error in SQLite

Posted: 10 Jun 2013 10:51 AM PDT

What are the possible things that would trigger the "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL inserts.

Sample data in the file:

insert into files (filesize, filedate, md5, fullpath, origin) values (5795096, 1370159412, "e846355215bbb9bf5f30102a49304ef1", "SDs/16G-1/DSC00144.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5435597, 1370159422, "1a7bcf3a4aaee3e8fdb304ab995ff80f", "SDs/16G-1/DSC00145.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5121224, 1370159432, "16d28e83599c731657a6cd7ff97a4903", "SDs/16G-1/DSC00146.JPG", "SDs");

I tried inserting that in the db file with the following command:

$ sqlite3 allfiles.db < insert.sql  

See below the error that I get:

Error: near line 27: disk I/O error
Error: near line 28: disk I/O error
Error: near line 34: disk I/O error
Error: near line 39: disk I/O error
Error: near line 47: disk I/O error
Error: near line 129: disk I/O error

The input lines that don't generate an error are successfully included, but I don't understand why some lines have errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding pragma synchronous = off; and pragma temp_store = memory;, to no avail.

I'm running this on Lubuntu, which runs in a VirtualBox virtual machine. The host machine is Windows 7. The working directory of the files is a shared folder, i.e., a folder on the host machine. If I run it in a "local folder" on the guest machine, the error doesn't happen, although for some reason it's much slower... In any case, I'd like to know the cause of the I/O error.

BULK INSERT in SQL

Posted: 10 Jun 2013 08:14 PM PDT

I have a text file whose entries I want to insert into a table in my DB.

My query looks like this:

BULK INSERT myDepartment FROM 'C:\myDepartment-c-t.txt'
WITH (
  DATAFILETYPE = 'char',
  FIELDTERMINATOR = '\t', -- default terminator
  ROWTERMINATOR = '\n'
);

On executing it, my SQL Server Management Studio 2008 gives the following error:

'BULK' rowset provider not supported in this version of SQL Server.  

I googled it and found that SQL Server 2008 does support BULK INSERT. Can anyone tell me what I am missing?

It's urgent, please help.

Postgres Problems with Phusion Deployment (Rails)

Posted: 10 Jun 2013 01:47 PM PDT

FATAL:  Ident authentication failed for user "test-postgres"  

So I have a Postgres server handling the database side of my Rails application, and I have a production database and a production user set up, but I keep getting the above error whenever I try to view my Rails application from a browser. Can somebody help me out? I have the Postgres user created and the database created already...

Troubleshooting high CPU usage from postgres and postmaster services?

Posted: 10 Jun 2013 05:59 PM PDT

I'm using an open source (RHEL 6.2 based) machine running SIEM software. When I run the top command, I see postgres and postmaster both at 96% CPU usage. Is there a way to pinpoint or see what is causing these services to stack up?
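
A first hedged step is to ask PostgreSQL itself what each backend is doing, via the standard pg_stat_activity view (the column names below are those of the pre-9.2 releases that shipped with RHEL 6):

  SELECT procpid, datname, usename, query_start, current_query
  FROM pg_stat_activity
  ORDER BY query_start;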

Is it safe to remove an entry in a "config.mongos" MongoDB collection?

Posted: 10 Jun 2013 01:08 PM PDT

There are old entries in my "config.mongos"; they reference hosts which previously had "mongos" running on them but no longer do.

Can I safely remove those entries?

e.g.:

# Old entries
{ "_id" : "silicon:27117", "ping" : ISODate("2012-12-27T19:51:04.560Z"), "up" : 9301, "waiting" : true }
...
# Current entries
{ "_id" : "app1:27017", "ping" : ISODate("2013-06-07T17:12:02.856Z"), "up" : 5593404, "waiting" : true }
{ "_id" : "app2:27017", "ping" : ISODate("2013-06-07T16:17:49.812Z"), "up" : 5590271, "waiting" : false }
{ "_id" : "app4:27017", "ping" : ISODate("2013-06-07T17:12:02.110Z"), "up" : 5593413, "waiting" : true }
{ "_id" : "app3:27017", "ping" : ISODate("2013-06-07T17:12:02.854Z"), "up" : 5593578, "waiting" : true }

Update:

One reason why having these old entries in config.mongos is a problem is that making changes to the shard takes a long time, while we wait for old mongoses to time out.

For example, stopping the balancer takes a long time (several minutes):

mongos> sh.stopBalancer()
Waiting for active hosts...
Waiting for active host magnesium:27117 to recognize new settings... (ping : Thu Dec 22 2011 03:08:14 GMT+0000 (UTC))
Waited for active ping to change for host magnesium:27117, a migration may be in progress or the host may be down.
Waiting for active host Boron:27017 to recognize new settings... (ping : Tue Jan 31 2012 21:48:12 GMT+0000 (UTC))
Waited for active ping to change for host Boron:27017, a migration may be in progress or the host may be down.
Waiting for active host Beryllium:27017 to recognize new settings... (ping : Wed Feb 01 2012 17:34:15 GMT+0000 (UTC))
Waited for active ping to change for host Beryllium:27017, a migration may be in progress or the host may be down.
Waiting for active host helium:27017 to recognize new settings... (ping : Mon Jun 10 2013 19:07:41 GMT+0000 (UTC))
Waited for active ping to change for host helium:27017, a migration may be in progress or the host may be down.
Waiting for active host boron:27017 to recognize new settings... (ping : Sun Jun 02 2013 08:46:57 GMT+0000 (UTC))
Waited for active ping to change for host boron:27017, a migration may be in progress or the host may be down.
Waiting for the balancer lock...

Does MySQL cache queries?

Posted: 10 Jun 2013 08:19 PM PDT

I'm interfacing with a MySQL database from PDO and executing an extensive SQL query. Normally it takes about 1500 ms; I still need to optimize it. When I run the PHP script twice with a short interval in between, the query only takes about 90 ms. The query is the same in both cases. When I run the script, with the same query, again after some time, it takes 1500 ms again.

Why is that? Does the database cache automatically? Is there some time the database saves the cache and then automatically deletes it?

I assume the results can't be cached by PHP because this happens in two different threads. I wouldn't think PHP would cache the results because it can't know if the database has changed.

I have a script running every minute to insert new rows into the database. This might also be why it takes 1500 ms again after some time: the cache would have been invalidated because the relevant tables are no longer the same.
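
That pattern matches the MySQL query cache, which stores complete result sets keyed on the exact query text and invalidates every cached entry for a table whenever that table is modified - which the once-a-minute insert script would do. A quick check with real server variables and status counters:

  SHOW VARIABLES LIKE 'query_cache%';  -- is the cache enabled, and how large?
  SHOW STATUS LIKE 'Qcache%';          -- hits, inserts, invalidations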

how to build table relationships in a firebird database?

Posted: 10 Jun 2013 11:52 AM PDT

I am a beginner in Delphi and I have a Firebird database with 2 tables namely masterlist and daily collection. I used Zeos 7.0.3 to access my Firebird database.

My masterlist contains the following columns:

╔══════╦══════╦═════════╦════════╗
║ name ║ date ║ balance ║ status ║
╚══════╩══════╩═════════╩════════╝

My daily collection contains the following columns:

╔══════╦══════╦═════════╦═════════╗
║ date ║ name ║ payment ║ balance ║
╚══════╩══════╩═════════╩═════════╝

I would like to build a relation in which the balance from the masterlist is copied to the balance column of the daily collection, and when I update that column in the daily collection it also updates the content of the masterlist.

I hope this will be considered a good question; I have tried very hard to make it a useful one.

dead lock when updating

Posted: 10 Jun 2013 11:25 AM PDT

Update Operation.TrTable
Set RecordId = RecordTABLE.newRecordId
From Operation.TrTable tr
    Inner Join (
        SELECT r.Id AS newRecordId, r.KeyM AS MappingKey
        From Operation.Record r
        WHERE r.KeyM > 0
    ) RecordTABLE
        ON RecordTABLE.MappingKey = tr.KeyM
WHERE tr.KeyM > 0

UPDATE Operation.Record SET KeyM = 0
WHERE KeyM > 0

UPDATE Operation.TrTable SET KeyM = 0
WHERE KeyM > 0

The error is:

Transaction (Process ID 93) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Creating the MySQL slow query log file

Posted: 10 Jun 2013 10:25 AM PDT

What do I need to do to generate the slow query log file in MySQL?

I did:

log_slow_queries = C:\Program Files\MySQL\MySQL Server 5.1\mysql-slow.log
long_query_time  = 1

What more do I need to do?
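
For what it's worth, in MySQL 5.1 the log can also be switched on at runtime; a hedged sketch with real system variables (log_slow_queries is the older alias for slow_query_log, and the log must be enabled, not just pointed at a file):

  SET GLOBAL slow_query_log = 1;   -- actually turn the log on
  SET GLOBAL long_query_time = 1;  -- threshold in seconds
  SHOW VARIABLES LIKE 'slow_query%';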

DB2 Server Table Space Locked

Posted: 10 Jun 2013 06:25 PM PDT

At work we keep receiving the following DataException seemingly at random when one of our processes tries to write/access a table for one of our clients:

com.ibm.db.DataException: A database manager error occurred. :
[IBM][CLI Driver][DB2/NT64] SQL0290N  Table space access is not allowed. SQLSTATE=55039

Has anyone encountered this before? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs.

Apologies if this post may be too vague, please let me know what information to supply to clarify things more. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases.

edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan run by IBM/Rackspace as part of maintenance? They said this rather dubiously though, so I doubt this is the culprit, because the tables remained locked for variable amounts of time.

unable to login oracle as sysdba

Posted: 10 Jun 2013 07:25 PM PDT

I just got 11gR2 running and was able to connect as sysdba. I shut down and started up the database to mount a new pfile. Now I cannot log in as sysdba. My parameter for the password file is:

 *.remote_login_passwordfile='EXCLUSIVE'  

I am using sqlplus within the server. This is not a remote connection.

[oracle@oel56 ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 5 22:50:46 2013

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

Here's some more information:

[oracle@oel56 ~]$ grep -E "ine SS_DBA|ine SS_OPER" $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "oracle"
#define SS_OPER_GRP "oracle"
[oracle@oel56 ~]$ id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),54321(oinstall),54322(dba),54323(oper) context=user_u:system_r:unconfined_t

"connectivity libraries needed are not found" error in IBM Data Studio

Posted: 10 Jun 2013 03:25 PM PDT

UPDATE

I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0.

The connectivity libraries that are needed for local or remote non-JDBC operations were not found. To provide these libraries, you can install IBM data server client or a local DB2 server instance.  

I have already started the instance using

db2start  

command.

After searching exhaustively, I am not able to find any help on the internet regarding this error.

MySql - changing innodb_file_per_table for a live db

Posted: 10 Jun 2013 02:59 PM PDT

I have a large MySQL DB (150 GB) and have only now noticed that innodb_file_per_table is set to off, which causes the entire DB to be hosted in one single file (ibdata1). I want to activate innodb_file_per_table and have it retroactively split the DB into several files. What's the best way to do this?
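
A hedged sketch of the usual approach (real statements; innodb_file_per_table is dynamic in 5.5): the setting only affects tables created or rebuilt after it is enabled, so each existing table must be rebuilt, and ibdata1 itself never shrinks without a full dump, re-initialization, and reload:

  SET GLOBAL innodb_file_per_table = 1;

  -- Rebuilding a table moves it into its own .ibd file (repeat per table)
  ALTER TABLE mydb.mytable ENGINE=InnoDB;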

[MS SQL Server] SQL Server TDE Encryption



SQL Server TDE Encryption

Posted: 09 Jun 2013 11:12 PM PDT

Hi All,

I have a PRODDB which is encrypted with TDE. I have the DMK and the DB certificate key, and I also have the password for the keys. Can you please suggest the steps I should use to change the password for the keys?
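
For the database master key specifically, the documented pattern is to add the new password before dropping the old one; a hedged sketch (passwords are placeholders):

  USE master;
  ALTER MASTER KEY ADD ENCRYPTION BY PASSWORD = 'NewStrongPassword!';
  ALTER MASTER KEY DROP ENCRYPTION BY PASSWORD = 'OldPassword!';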

SQL 2008 R2 High Memory Usage

Posted: 09 Jun 2013 08:08 PM PDT

Dears, is it normal for sqlservr.exe to take that much memory? The installed version is Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64), Enterprise Edition (64-bit), on Windows 2008 R2 SP1 (Windows NT 6.1 <X64>, Build 7601: Service Pack 1). Please check the attached screenshot and let me know if further information is needed. Thanks all, and I really appreciate a fast response, as this is a production DB.
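
Context that usually answers this: by design, SQL Server grows its buffer pool toward all available RAM unless it is capped, so a large working set is normal rather than a leak. A hedged sketch of the standard cap (the 8192 MB figure is only an example; the options are real):

  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'max server memory (MB)', 8192;
  RECONFIGURE;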

sp_executesql -- can produce very wrong execution plan

Posted: 09 Jun 2013 11:56 AM PDT

A front-end app sends a query to SQL 2008 using sp_executesql. The query returns a count as the final result. It hits some large tables (2m - 3m rows) and is very poorly constructed (one of those SQL generators), consisting of many joins, cross-DB joins (same server), subqueries, correlated subqueries, etc.

Five parameters are passed via sp_executesql, with one of these values the most likely to change and affect the size of the subset of data. The query normally takes a couple of seconds to run, with the average number of rows in the first part of the query plan being 3000 - 5000. Every now and then the query takes 15 - 20 minutes, and the only change is that one parameter (the one that mainly affects the subset of data). In the cases we know of, the slow performance happens when this change takes the subset to 50,000 - 60,000 records. Even then, the same query run via sp_executesql in SSMS normally takes only seconds.

So, having said that: I got some info the other day when the query was slow, and the execution plan it used was WAY different. It was not a case of the same plan being suboptimal for the slightly larger record set, but a totally different plan. If I run sp_updatestats it fixes the issue -- I assume it forces a plan recompile, so it only indirectly fixes the issue??

Comparing the same statement via sp_executesql, one good and one bad, I noticed the following in the plan XML. Remember: exactly the same query run in exactly the same way, except one must have had a new plan created via the stats update.

good (1 second), from the first filter step in the plan:
EstimateCPU="0.000756482" ActualRows="3608"
<ColumnReference Column="@P2" ParameterCompiledValue="(3817)" ParameterRuntimeValue="(3817)" /> -- 3817 is the value passed in

bad (17 min), from the first filter step in the plan:
EstimateCPU="3.91782E-05" ActualRows="2277436"
<ColumnReference Column="@P2" ParameterCompiledValue="(3969)" ParameterRuntimeValue="(3817)" /> -- 3817 is the value passed in

So I guess the difference in performance is that for the same step (a JOIN) in the plan, the good one returns 3608 rows while the bad one returns 2.2 million rows, and it is SEEKing and doing key lookups with those 2.2 million rows.

Is that enough info for somebody to help me understand why it does this and how to fix it? You can also see that the bad query is re-using a previous plan, because its ParameterCompiledValue and ParameterRuntimeValue differ. I thought parameter sniffing with sp_executesql would use the same plan for each parameter, which might cause a SEEK instead of a SCAN, rather than a totally different plan?

thanks
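
For readers hitting the same wall: a ParameterCompiledValue that differs from ParameterRuntimeValue is the classic parameter-sniffing signature - the plan was compiled for one value and reused for another. Two real query hints commonly weighed in this situation, sketched on a placeholder query:

  -- Recompile on every execution: always sized for the runtime value, at CPU cost
  SELECT COUNT(*) FROM dbo.BigTable WHERE KeyCol = @P2
  OPTION (RECOMPILE);

  -- Or build the plan for an 'average' value instead of whatever was sniffed first
  SELECT COUNT(*) FROM dbo.BigTable WHERE KeyCol = @P2
  OPTION (OPTIMIZE FOR (@P2 UNKNOWN));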

[Articles] No R2



No R2

Posted: 09 Jun 2013 11:00 PM PDT

The next version of SQL Server was announced last week, and Steve Jones is pleased with the name.
