Monday, April 22, 2013

[how to] Is this table in 3NF?

Is this table in 3NF?

Posted: 22 Apr 2013 07:23 PM PDT

I have a table person with attributes:

id  name  city  state  country  

id is the primary key. Is this table in 3NF? If 1,000 people live in the same city, it seems redundant to store the same city, state, and country data for each of them.
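For context, a minimal sketch of one possible decomposition into 3NF, assuming each city belongs to exactly one state and country (table and column names here are illustrative, not from the question):

-- Location data is stored once per city and referenced by id.
CREATE TABLE city (
    city_id  INT PRIMARY KEY,
    name     VARCHAR(100),
    state    VARCHAR(100),
    country  VARCHAR(100)
);

CREATE TABLE person (
    id       INT PRIMARY KEY,
    name     VARCHAR(100),
    city_id  INT REFERENCES city (city_id)
);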

Can I drop and recreate a system extended stored procedure?

Posted: 22 Apr 2013 05:29 PM PDT

I am having a problem with one of my servers where I cannot execute xp_cmdshell under certain conditions. The full problem can be found here: problem running xp_cmdshell when Application Name set in connection string.

I want to see if dropping and recreating it will fix it (on a test server first). I'm grabbing at straws here.
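For reference, a rough sketch of what dropping and re-adding the extended procedure might look like; this assumes the standard xplog70.dll that ships with SQL Server, so verify the DLL name on the server before trying it (on the test box first, as noted above):

USE master;
GO
-- Remove the current registration of the extended stored procedure
EXEC sp_dropextendedproc 'xp_cmdshell';
GO
-- Re-register it against the DLL that implements it
EXEC sp_addextendedproc 'xp_cmdshell', 'xplog70.dll';
GO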

How to flatten results of a table with two related "many" tables?

Posted: 22 Apr 2013 03:39 PM PDT

I've reorganized some tables in my database to be more flexible, but I'm not really sure how to write the SQL to extract meaningful data from them.

I have the following tables (somewhat abbreviated for a clearer example):

CREATE TABLE Loans(
    Id int,
    SchemaId int,
    LoanNumber nvarchar(100)
);

CREATE TABLE SchemaFields(
    Id int,
    SchemaId int,
    FieldName nvarchar(255)
);

CREATE TABLE LoanFields(
    Id int,
    LoanId int,
    SchemaFieldId int,
    FieldValue nvarchar(4000)
);

With the following data:

INSERT INTO Loans (Id, SchemaId, LoanNumber) VALUES (1, 1, 'ABC123');

INSERT INTO SchemaFields (Id, SchemaId, FieldName) VALUES (1, 1, 'First Name');
INSERT INTO SchemaFields (Id, SchemaId, FieldName) VALUES (2, 1, 'Last Name');

INSERT INTO LoanFields (Id, LoanId, SchemaFieldId, FieldValue) VALUES (1, 1, 1, 'John');
INSERT INTO LoanFields (Id, LoanId, SchemaFieldId, FieldValue) VALUES (2, 1, 2, 'Doe');

The objective is to get a query that returns one flat row per loan with all of its fields. (In the real world there will likely be between 20 and 30 fields for the same schema, but we just have 2 in the example):

LoanNumber   First Name    Last Name
----------   -----------   ----------
ABC123       John          Doe

I cannot use a pivot that hard-codes 'First Name' and 'Last Name' because I'll have no idea what fields will actually be there.

I have a SQL Fiddle here with schema already in place.

How can I get the desired result?
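One common approach, sketched here rather than taken from the Fiddle, is to build the PIVOT column list dynamically from SchemaFields and run it with sp_executesql:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build a comma-separated, quoted list of field names, e.g. [First Name],[Last Name]
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(FieldName)
    FROM SchemaFields
    WHERE SchemaId = 1
    ORDER BY Id
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SET @sql = N'
SELECT LoanNumber, ' + @cols + N'
FROM (
    SELECT l.LoanNumber, sf.FieldName, lf.FieldValue
    FROM Loans l
    JOIN LoanFields lf   ON lf.LoanId = l.Id
    JOIN SchemaFields sf ON sf.Id = lf.SchemaFieldId
) AS src
PIVOT (MAX(FieldValue) FOR FieldName IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql;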

Is it possible to select internal InnoDB columns?

Posted: 22 Apr 2013 02:01 PM PDT

I have read that InnoDB uses a few fields internally for all rows, such as the hidden columns DB_TRX_ID and DB_ROW_ID. Is there any easy way to see these values, for example within a SELECT statement?

SSIS Suddenly Requires DCOM Permissions

Posted: 22 Apr 2013 02:18 PM PDT

I have recently deployed some SSIS packages. I made a job to run them one after another. However, I am not allowed to use MSDTC, so I chained the jobs together like I have done before, but now I am faced with an error message about DCOM. All my Internet research points to insufficient permissions on the MSDTC component, but I am worried this is heading in the wrong direction.

Any idea how I can "reset" the need for MSDTC for this package?

How to override SSIS 2008 package config file path?

Posted: 22 Apr 2013 01:29 PM PDT

I have an SSIS 2008 package with a single configuration named Config. I would like to use the dtexec /SET switch to change the path of the config file that it uses, so that the developers who build the package can point it at their local path, while on our production servers we can specify a different path for dtexec to load the config (with prod settings) from.

According to MSDN we should be able to do this using the /SET command as it mentions here:

Using the /Set option to change the location from which the utility loads design-time configurations Behavior in SQL Server Integration Services 2008

Succeeds.

After the utility applies the run-time options, the utility reloads the design-time configurations from the new location.

I've tried:

dtexec /FILE "PackagePath" /SET  "\Package.Configurations[Config].Properties[ConfigurationString]";"ConfigFilePath"  

and it fails with:

The package path referenced an object that cannot be found: "\Package.Configurations[Config].Properties[ConfigurationString]". This occurs when an attempt is made to resolve a package path to an object that cannot be found.

I've also pasted a redacted version of my .dtsx file here in case anyone thinks I'm just mistyping something. And here are the contents of the config file that I'm trying to change it to.

I realize there are other ways of doing indirect configs using environment variables, etc., but that is not what I'm interested in, so please do not offer up those suggestions. I'm specifically trying to use the /SET command since Microsoft has explicitly stated that we should be able to. Due to the way our environment is set up, this method would be the best for us.

Why would SequelPro only import 23k rows out of 130k?

Posted: 22 Apr 2013 02:18 PM PDT

I use Sequel Pro for MySQL on Mac OS X, and I used the import function to upload a 130k-row .csv file to my database. Everything seems to work fine, then I get the message: File Read Error: An error occurred when reading the file, as it could not be read using the encoding you selected (Auto-detect - Unicode (UTF-8)). Only 23,000 rows were imported.

When I hit "Ok," everything else seems to work relatively fine -- I'm just missing about 107,000 rows.

Any idea as to what it could be? Maybe I should use something other than auto-detect during the import? I thought that it might have been some extra commas floating around in the actual .csv file, which there were, but I got rid of those and the same thing happened.

Thanks!

--

Edit: I also don't know if this is an option for everybody, but since the problem stems from exporting from a Windows machine to a Mac and then opening the file in Excel for Mac, the problem seems to go away if I just use OpenOffice instead.

PL/SQL SUM - ORA-00937: not a single-group group function

Posted: 22 Apr 2013 01:48 PM PDT

For every journey I want to sum its stages to work out the total distance.

SELECT Journey.Journey_No, Stages.Stage_ID, SUM(Stages.Distance_Between)
FROM Journey, Journey_Stages, Stages
WHERE Journey.Journey_No = Journey_Stages.Journey_No
AND Journey_Stages.Stage_ID = Stages.Stage_ID;

Tables are as followed:

CREATE TABLE Journey_Stages(
    Journey_No integer REFERENCES Journey(Journey_No),
    Stage_ID integer REFERENCES Stages(Stage_ID)
);

CREATE TABLE Stages(
    Stage_ID integer PRIMARY KEY,
    Start_Station integer REFERENCES Stations(Station_ID),
    End_Station integer REFERENCES Stations(Station_ID)
);

CREATE TABLE Journey(
    Journey_No integer PRIMARY KEY,
    Train_No integer REFERENCES Train(Train_No)
);

However I am getting the following error:

ORA-00937: not a single-group group function
00937. 00000 - "not a single-group group function"
*Cause:
*Action:
Error at Line: 273 Column: 26

I am certain this is something simple to resolve.
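For what it's worth, ORA-00937 usually means an aggregate is mixed with non-aggregated columns without a GROUP BY. A sketch of one possible fix, assuming the goal is one total per journey (Stage_ID is dropped from the select list):

SELECT Journey.Journey_No, SUM(Stages.Distance_Between) AS Total_Distance
FROM Journey, Journey_Stages, Stages
WHERE Journey.Journey_No = Journey_Stages.Journey_No
  AND Journey_Stages.Stage_ID = Stages.Stage_ID
GROUP BY Journey.Journey_No;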

What is the future for system and database administration [closed]

Posted: 22 Apr 2013 10:54 AM PDT

Let me introduce myself for a better understanding of my situation. I love coding; I've been at a company as a software developer for 5 years. But for the past 3 years I've been pushed more towards deployment and reporting (development and configuration, scheduling, etc.) because I have developed a love for Linux (Ubuntu), which I currently use as my main operating system. I've recently inherited the management of 4 online dedicated Linux servers, and I learn as I go since I have not considered it my main job.

This morning I was asked whether I would like to join an engineering operations department that is going to be created, and I love both coding and this deployment and database work.

I'm of course interested, but I would not want to shoot myself in the foot, as I don't have enough information about the field, such as whether it is a dying field as computers get smarter and smarter.

I would like those in the field to offer some insight.

I however have more direct questions:

  1. Why would someone choose to become a system and database administrator?
  2. What does it take to be a good system administrator and DBA?
  3. What is hot in the field as of 2013?
  4. What are good resources for becoming an actual system administrator and DBA?

thanks

Why are some MySQL system variables read-only?

Posted: 22 Apr 2013 11:18 AM PDT

There are more or less 277 MySQL server system variables. Some of these variables are dynamic and some are not.

I don't understand the reason behind read-only variables. Why are a few of the MySQL system variables read-only? What would happen if they made those variables dynamic?

Is there any deep reason for the read only variables?

For now we can consider :

  • innodb_buffer_pool_size
  • innodb_checksums
  • datetime_format
  • log-bin

and there are many other variables, which can be found at http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html

Of course, I know we can't change a variable like version. But many other variables like log-bin are not dynamic. To me it does not make sense that general_log and log_slow_queries are dynamic but log-bin is not. Likewise there are many other variables.


Does SQL Server place shared locks on scanned records when using REPEATABLE READ

Posted: 22 Apr 2013 10:52 AM PDT

Assume a SQLCmd session which is using transaction isolation level REPEATABLE READ.

In this session I start a transaction and execute an UPDATE statement with a WHERE clause on a non-indexed column. This statement should evaluate the WHERE clause for every record in the table, but only one will match.

If I examine the locks placed under this transaction after running the UPDATE statement, I can see only two IX locks (on the table and the page) and an X lock on the row that was updated.

My question is: shouldn't the database engine place shared locks on all the rows it read to ensure REPEATABLE READ? What if some other transaction updates a record so that it now matches the WHERE clause of my UPDATE statement, thereby violating REPEATABLE READ?

If I execute a SELECT *, then I can see it placing S locks on each row that is not already locked with X.

Can anyone help me understand this situation?

I've tried with both SQL Server 2008 R2 and 2012, same behavior in both.
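For anyone wanting to reproduce the observation, a sketch of how the locks can be inspected from a second session while the transaction is still open (sys.dm_tran_locks is the relevant DMV; the session_id value is illustrative):

-- Run from another session; replace 53 with the session_id of the SQLCmd session.
SELECT resource_type,
       resource_description,
       request_mode,      -- S, X, IX, ...
       request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 53
ORDER BY resource_type, request_mode;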

Header and line item data source mismatch

Posted: 22 Apr 2013 10:35 AM PDT

I'm working with a star schema for a data warehouse and I am running into a problem with header and line items from different data sources.

CREATE TABLE DataSourceAHeader
(
     OrderId INT NOT NULL
    ,TotalCost MONEY NOT NULL
    -- Date, etc...
);

CREATE TABLE DataSourceALine
(
     OrderId INT NOT NULL
    ,LineNumber INT NOT NULL
    -- Dates, etc...
);

CREATE TABLE DataSourceBLine
(
     OrderId INT NOT NULL
    ,Cost MONEY NOT NULL
    ,LineNumber INT NOT NULL
);

I have data sources A and B which represent the same data in different ways. Data source A contains headers and line items, but it only has the net outcome (Total Cost) in the header. Data source B contains only line items and each item has an outcome (Cost).

I could keep two fact tables (one for the header and one for the line items), but from what I have researched that seems inadvisable. Is there a way to deal with this kind of mismatched format?

My current strategy:

CREATE TABLE Fact.Order
(
     Id BIGINT IDENTITY PRIMARY KEY
    ,OrderId INT NOT NULL
    ,Cost MONEY NOT NULL
    -- Date key, etc...
);

CREATE TABLE Fact.OrderLine
(
     Id BIGINT IDENTITY PRIMARY KEY
    ,OrderFactId BIGINT NOT NULL REFERENCES Fact.Order (Id)
    ,LineNumber INT NOT NULL
    -- related line stuff
);

DataSourceAHeader and DataSourceBLine are inserted into Fact.Order, with the line items going into Fact.OrderLine. DataSourceBLine is split so that each line gets its own Fact.Order row.

Here is an example for DataSourceAHeader and DataSourceALine:

SELECT * FROM Fact.Order;

|------------------------------------|
|   Id   |   OrderId   |   Cost      |
|   1    |     1100    |   12000.00  |
|   2    |     1101    |   10000.00  |
|------------------------------------|

SELECT * FROM Fact.OrderLine;

|-------------------------------------------|
|   Id   |   OrderFactId   |   LineNumber   |
|   1    |        1        |       1        |
|   2    |        1        |       2        |
|   3    |        1        |       3        |
|   4    |        2        |       1        |
|   5    |        2        |       2        |
|   6    |        2        |       3        |
|-------------------------------------------|

Here is an example for DataSourceBLine:

SELECT * FROM Fact.Order;

|---------------------------------|
|   Id   |   OrderId   |   Cost   |
|   1    |     1000    |   12.00  |
|   2    |     1000    |   10.00  |
|---------------------------------|

SELECT * FROM Fact.OrderLine;

|-------------------------------------------|
|   Id   |   OrderFactId   |   LineNumber   |
|   1    |        1        |       1        |
|   2    |        2        |       2        |
|-------------------------------------------|

Restoring a database backup to a local machine in MySQL

Posted: 22 Apr 2013 11:50 AM PDT

I have downloaded the .sql dump of my database from the server. It is in the D: drive of my Windows machine.

I want to restore the backup on this machine. I used:

mysql database -u root < backupfile.sql  

Where database is the name of my new database on this machine. My confusion is that my backupfile.sql is in the D: drive, and I guess that is why I get this error:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysql database -u root < backupfile.sql' at line 1.
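For what it's worth, the 1064 syntax error suggests the command was typed at the mysql> prompt rather than at the Windows command prompt. A sketch of both ways it could be run (paths and the database name are illustrative):

-- From the Windows command prompt (cmd.exe), not inside the mysql client:
--   mysql -u root -p database < D:\backupfile.sql

-- Or, from inside the mysql client:
USE database;
SOURCE D:/backupfile.sql;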

Simple MySQL query randomly takes forever

Posted: 22 Apr 2013 02:08 PM PDT

I'm having an issue on my server with a simple PHP send-emails script that runs every minute via a cronjob. The script contains a MySQL query that runs very fast most of the time, but randomly takes "forever".

It feels like a similar problem to this: http://stackoverflow.com/questions/976739/mysql-query-randomly-takes-forever

At seemingly random times while the MySQL query is running the load on my server spikes from an average of 0.5 up to 8 and my entire server slowly grinds to a near halt. The query does eventually complete but sometimes it runs for over 10 minutes -- on a "normal" occasion the query completes in 0.2 seconds.

This is the query:

SELECT * FROM email_messages
WHERE time_created <= "2013-04-22 10:40:00" AND is_sent = 0 AND is_cancelled = 0
LIMIT 6

time_created is (or should be) indexed; its purpose is to create a 10-minute delay before sending, so emails can be cancelled if necessary.

I've isolated it to this query because my server trucks along nicely without the load average spikes when this cronjob is turned off.

My server is a VPS with 2GB of RAM. The database isn't huge (750MB in total), but the email_messages table being selected from is by far the largest at around 400MB.

Through research, I thought it was a MySQL query cache problem because the query cache was turned on. But turning off the query cache has not resolved the issue.

I turned off query_cache by setting query_cache_size = 0.

Any ideas at all as to what could be going on here?

[CONCLUSION]

I did an experiment by including a condition in the query using my primary key so that only the last 1000 table entries were being searched through.

... AND id > 131000 ...  

Running the cronjob with this condition resolved the issue, and further investigation in the mysql console with EXPLAIN revealed that the index on time_created had not been created as I had expected.

I now need to read up on how to use HeidiSQL to properly create indexes because HeidiSQL is telling me there is an index key on time_created but EXPLAIN says otherwise. This may be a trap for young players!
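A sketch of adding and verifying the index (table and column names are the ones from the question; the index name is illustrative):

-- Add the missing index on the filter column
ALTER TABLE email_messages ADD INDEX idx_time_created (time_created);

-- Confirm the optimizer uses it
EXPLAIN SELECT * FROM email_messages
WHERE time_created <= '2013-04-22 10:40:00' AND is_sent = 0 AND is_cancelled = 0
LIMIT 6;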

Backing up file groups with restore to earlier backup SQL Server 2008

Posted: 22 Apr 2013 04:45 PM PDT

I have a task: I have created a database with a filegroup lamb that contains a few tables.

I need to load a csv into a table in the filegroup named lamb and back up the lamb filegroup.

Then load another csv file and back up again.

But I need to be able to restore the first backup.

What backup and restore method should I use?

Is it possible to restore only the lamb filegroup?
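A rough sketch of the filegroup backup and restore syntax, assuming the database is called MyDatabase (illustrative), uses the full recovery model, and has log backups available, since a filegroup restore generally needs the log chain to bring the filegroup back online:

-- Back up just the lamb filegroup
BACKUP DATABASE MyDatabase
    FILEGROUP = 'lamb'
TO DISK = 'D:\Backups\MyDatabase_lamb_1.bak';

-- Later: restore that filegroup backup, then roll forward with log backups
RESTORE DATABASE MyDatabase
    FILEGROUP = 'lamb'
FROM DISK = 'D:\Backups\MyDatabase_lamb_1.bak'
WITH NORECOVERY;

RESTORE LOG MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_log.trn'
WITH RECOVERY;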

problem running xp_cmdshell when Application Name set in connection string

Posted: 22 Apr 2013 02:50 PM PDT

I have an app on 10 or so servers that runs a few xp_cmdshell statements against MSSQL 2008. It works fine on all the servers except one. To make things worse, I can run all the commands in SQL Management Studio, but from the app they don't work. I even made a temporary app to test and it works fine! But in the deployed app, I get a simple SQL error, "Access Denied". I have narrowed it down to the connection string: it fails if I include the application name

Data Source={0};Initial Catalog={1};Persist Security Info=True;User ID={2};Password={3};Application Name=TheGroovyApp  

It throws access denied only when calling xp_cmdshell; normal SQL statements work fine. But if I remove the application name

Data Source={0};Initial Catalog={1};Persist Security Info=True;User ID={2};Password={3}  

It works fine for both normal SQL statements and calls to xp_cmdshell. The strange thing is, it's only happening on one of the ten servers. The only difference is that this server has SP1 and the others do not.

@@VERSION returns

Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64)
Apr  2 2010 15:48:46
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)

I am thinking that there is some sort of authentication that can be granted to applications, but I can't seem to find anything. I can replicate it in SQL Management Studio by adding

Application Name=TheGroovyApp  

to the Additional Connection Parameters tab of the Connect to Database Engine dialog when creating a new query or changing its connection.

The simple test statement I use is

EXEC master..xp_cmdshell 'DIR F:\SomeDirectory'  

If anyone could shed some light on what is happening it would be much appreciated.

EDIT:

OK, after a bit more investigation it's even more confusing.

If I set the Application Name to the following, which is the default for .NET connections, it works fine.

Application Name=".Net SqlClient Data Provider"  

I can run xp_subdirs without any problems no matter which settings I use:

EXEC master..xp_subdirs 'F:\SomeDirectory'   

Now this is where it gets really weird. The first two fail, but the last one succeeds with the application name set to my application name. But this only happens when xp_cmdshell is called; xp_subdirs works with all three.

With Application Name set in the connection:

EXEC master..xp_cmdshell 'DIR F:\SomeDirectory' - Fails
master..xp_cmdshell 'DIR F:\SomeDirectory' - Fails
xp_cmdshell 'DIR F:\SomeDirectory' - Works
EXEC master..xp_subdirs 'F:\SomeDirectory' - Works
master..xp_subdirs 'F:\SomeDirectory' - Works
xp_subdirs 'F:\SomeDirectory' - Works

With Application Name not set in the connection:

EXEC master..xp_cmdshell 'DIR F:\SomeDirectory' - Works
master..xp_cmdshell 'DIR F:\SomeDirectory' - Works
xp_cmdshell 'DIR F:\SomeDirectory' - Works
EXEC master..xp_subdirs 'F:\SomeDirectory' - Works
master..xp_subdirs 'F:\SomeDirectory' - Works
xp_subdirs 'F:\SomeDirectory' - Works

The error returned in the query messages area in SSMS when it fails:

Msg 10011, Level 16, State 1, Line 1
Access denied.

This error message is only happening on the one server, I cannot replicate this on any others.
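Not an explanation for the Application Name behaviour, but for anyone comparing the odd server against the nine working ones, a sketch of the standard xp_cmdshell configuration checks (all documented commands; nothing here is specific to this setup):

-- Is xp_cmdshell enabled at the instance level?
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell';

-- Which principals in master have EXECUTE on xp_cmdshell?
SELECT pr.name, pe.permission_name, pe.state_desc
FROM master.sys.database_permissions pe
JOIN master.sys.database_principals pr
    ON pe.grantee_principal_id = pr.principal_id
WHERE pe.major_id = OBJECT_ID('master.dbo.xp_cmdshell');

-- Is a proxy account configured for non-sysadmin callers?
SELECT name FROM sys.credentials
WHERE name = '##xp_cmdshell_proxy_account##';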

What could cause the wrong ID to be inserted?

Posted: 22 Apr 2013 01:14 PM PDT

I have a SQL Server 2008 server (build 10.0.5500). Earlier this week I ran this on a table that already had data in it:

delete from dbo.table
go

dbcc checkident('dbo.table', reseed, 0)

When the user went to create a new record later on, somehow an ID of 0 was inserted into the ID column, instead of the 1 that SQL Server usually assigns when identity(1,1) is configured for the ID.

This caused some weird issues, but clearing the data and running the reseed again resulted in a 1 being inserted, as expected. I can't reproduce the issue.

For reference, here's the general format of our save stored procedures:

alter procedure dbo._TableSave
    @pk_id int,
    @field varchar(50)
as
    if (@pk_id is null)
    begin
        set nocount on;

        insert into dbo.Table
        (
            Field
        )
        values
        (
            @field
        );
        select scope_identity();
    end
    else
    begin
        update dbo.Table
        set Field = @field
        where PK_ID = @pk_id

        select @pk_id
    end

Does anyone know what could cause SQL Server to insert a 0 in the ID when it should have been a 1?
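One thing that may be relevant: per the DBCC CHECKIDENT documentation, after a plain DELETE the next insert should use the reseed value + 1 (i.e. 1 here), while on a table that was truncated or never populated the next insert uses the reseed value itself (0). The current identity value can be checked without changing anything (the sketch below uses the placeholder table name from the question):

-- Reports the current identity value and the current maximum column value
-- without reseeding anything.
DBCC CHECKIDENT ('dbo.Table', NORESEED);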

How to migrate data from SQL database to MySQL including flat files?

Posted: 22 Apr 2013 10:03 AM PDT

A friend of mine asked an ASP developer to build an online application where users can also upload files: jpg, doc, pdf, and so on, and all these files are saved in the SQL Server database as flat files. Now my friend has asked me to redo everything in PHP, and he wants to migrate all the data he has in the SQL Server database to a MySQL database. I can do it; text can be migrated easily, but can the files saved in the SQL Server database be migrated to MySQL as flat files? Is it possible?

SQL Server replication conflicts after migration from 2000 to 2008

Posted: 22 Apr 2013 01:03 PM PDT

I got a suggestion over at Stack Overflow to post here... grateful for any and all help.

Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Server Box - New Win Server 2008 R2 running SQL Server 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows for Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection an inspection record is created via the web app in the DB. A status flag is set such that the HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will let them show up on their mobile device. The user can synch multiple times in the field sending their data up to the SQL Server/web app and receive more inspections down or other updates such as look up table data...typical merge replication here and has been working great for years. Once the field user changes the status of the inspection, it will travel from mobile device to SQL Server database and be removed from their iPaq. The inspection has additional work flow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. As far as I can tell the publication settings are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/db data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operation:

New row created in the database >> mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all the lost data.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read please help. I am stuck after 3 days.

Tools and methodologies to keep two DBs aligned

Posted: 22 Apr 2013 02:03 PM PDT

2 DBs having schemas that represent the same semantic objects. The first one is the production DB (a non-RDBMS, in-house implemented, in-memory DB with a shitload of RAM). The other is Postgres.

Once in a while the production DB is changed (schema upgrade).

Question: what is the best practice for keeping DBs of different types semantically aligned?

How to connect to a Database made by Oracle SQL Database?

Posted: 22 Apr 2013 11:03 AM PDT

So I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle Database on my Windows PC. It came in two zip files. I installed it and the web-based portion of it works fine. I can log in with username sys and password **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work?

I also found this LINK. Is this something I should do? I do not have a domain, though.

listener.ora

# listener.ora Network Configuration File:
# C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\hyderz\product\11.2.0\dbhome_2)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\hyderz\product\11.2.0\dbhome_2\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    )
  )

ADR_BASE_LISTENER = C:\app\hyderz

tnsnames.ora

# tnsnames.ora Network Configuration File:
# C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))

ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
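For what it's worth, a sketch of the usual change: replace localhost with the machine's hostname or IP address in both files and restart the listener (my-pc is an illustrative hostname; the port stays 1522):

# listener.ora -- the address the listener accepts connections on
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = my-pc)(PORT = 1522))
    )
  )

# tnsnames.ora -- the address SQL Developer on the other machine connects to
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = my-pc)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

# Then restart the listener from a command prompt:
#   lsnrctl stop
#   lsnrctl start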

replicate-wild-ignore-table does not stop already logged queries on slave

Posted: 22 Apr 2013 12:03 PM PDT

We have a master DB where multiple queries were fired at a specific table. These queries were taking too long to execute on the slave, and it was lagging behind; as a result, the queries were queued and executed one by one.

Later we identified that this table did not need to be replicated, so we added it to replicate-wild-ignore-table in the slave's my.cnf and restarted the slave. But the slave again picked up the remaining queries for that table. Ideally it should have skipped the queries for that table. We are not able to figure out why this is happening.

Can somebody explain?
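In case it helps with checking the configuration, a sketch of the expected my.cnf syntax; note that the pattern must include the database part, and % / _ wildcards are allowed in either part (names here are illustrative):

[mysqld]
# Skip replication of statements that touch this table
replicate-wild-ignore-table = mydb.big_table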

How do I display subtotals in cube browser in SSAS 2012 (Sql Server Data Tools)?

Posted: 22 Apr 2013 05:03 PM PDT

Apparently this feature was available in previous versions but has now been removed.

What are some alternatives?

Bulk insert into SQL Server from VMWare guest using distributed switch

Posted: 22 Apr 2013 09:57 AM PDT

This is most likely not a SQL Server issue, but the setup seems to only be affecting BULK INSERTs to SQL Servers.

We have recently moved VM Hardware and all the guests that were moved had their virtual switches changed from standard to distributed.

I then started receiving

A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)

on two SQL Servers during BULK INSERT operations. One of the SQL Servers was a VM with the new configuration and the other was a physical server. Both BULK INSERT operations originated from a VM with the new configuration. The BULK INSERTs would not fail every time; it was very random when they would.

When we changed the virtual switch to be a standard switch instead of a distributed switch the issue goes away.

I am looking for more of an explanation to why it doesn't work with a distributed switch instead of a resolution. My guess would be that the BULK INSERT operation is serial and with a distributed switch the packets are being routed through different hosts, some of which may be busier than others, and are arriving at the destination server beyond some latency threshold. (note: there is nothing in the windows event log at the times of the errors on either the source or destination server)

Merge Replication SQL

Posted: 22 Apr 2013 04:03 PM PDT

I am currently having a problem with the following: creating a merge replication is simple if the servers are on the same network, but I would like to know how I can do a merge replication if the servers are on separate networks. I am using SQL Server 2008 R2.

Migrating MySQL to new server

Posted: 22 Apr 2013 06:03 PM PDT

We have a MySQL 5.1 instance running on Windows Server 2008 R2 x64 (Server A) replicated to another MySQL 5.1 instance on Windows Server 2003 R2 (Server B).

We have a new MySQL 5.5 instance on another Windows Server 2008 R2 x64 machine (Server C).

I want to replace Server A with Server C.

My plan is to set up Server C as another slave to Server A by 'cloning' Server B, like this:

http://www.redips.net/mysql/add-new-slave/.

After that I plan on doing this:

http://blesseddlo.wordpress.com/2011/06/03/convert-mysql-slave-to-master/.

After that I want to change Server B's master to Server C.

Is this a viable approach? Are there caveats? May there be any compatibility issues when setting a MySQL 5.5 instance as slave to a MySQL 5.1 instance?

I can't just stop everything and use mysqldump since there can be no downtime, and we have ~25 GB of data that must be transferred over a Gigabit LAN.
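A sketch of the final repointing step on Server B once Server C has been promoted (host, user, password, log file, and position are illustrative and would come from SHOW MASTER STATUS on Server C):

-- On Server B: repoint replication from Server A to Server C
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'server-c',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;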

MySQL: Lock wait timeout exceeded

Posted: 22 Apr 2013 03:03 PM PDT

I have a developer who has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running

ALTER TABLE foo DROP FOREIGN KEY fk_foo_1;  

Today while it was running I checked in on it periodically; everything looked normal, the query was running, in state "copying to tmp table", and I could see the temp table on disk getting larger and larger (still plenty of free space). I could see other processes blocked waiting for this table to be unlocked. Finally, after about 4.5 hours, they got the "Lock wait timeout exceeded; try restarting transaction" error. This is actually the second time they've tried, and it seems to fail right about when I would expect it to complete.

innodb_lock_wait_timeout is set to the default of 50, and I can't imagine something that has been running for so long would be affected by it. No errors logged, no deadlocks or other weirdness seen in SHOW ENGINE INNODB STATUS. Can anyone help me with other ideas? I'm fairly stumped on this one.

thanks
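Not an answer, but a sketch of the checks that could be run while the ALTER is close to finishing; the INFORMATION_SCHEMA tables shown require the InnoDB plugin or MySQL 5.5+, which is an assumption about this server:

-- What is currently running and in what state
SHOW FULL PROCESSLIST;

-- Open InnoDB transactions and what they are waiting on
SELECT trx_id, trx_state, trx_started, trx_query
FROM INFORMATION_SCHEMA.INNODB_TRX;

-- Which transaction is blocking which
SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS;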

MI Data Warehouse Advice

Posted: 22 Apr 2013 07:03 PM PDT

I have recently started a new job and part of my remit is to try to rescue the Management Information (MI) data warehouse. I use the term "data warehouse" very loosely here!

The server setup is:

  • Windows Server 2008
  • MS SQL Server 2008 R2 Standard Edition

The disks are split into 3 drives:

  • C: (OS) & System Databases

  • D: Log Files (Approx 120GB Free)

  • E: Data Files (Approx 1.3TB Free)

These are the observations I have made regarding the database:

  • The schema is exactly the same as the equivalent production database schema. So rather than extracting and transforming the data into a data-warehouse-style schema, it is basically a copy of production that is used for reporting.
  • The tables do not have any indexes, primary keys or foreign keys.
  • Most information required for reporting is located in 5-10 core tables. These tables range from 10-15 million rows to 1.6 billion rows.
  • There are no stored procedures or views. All reports use embedded SQL (most of these reports are Excel docs.)
  • I have found some tables that contain duplicate rows, that shouldn't be there.
  • There are hundreds of logins with access, and no one seems to know who they belong to or why they are needed.

Importing data

The data is imported using batch files and OSQL. It is slow, clunky, and prone to failure (it has failed 4 times and I have only been here for two and a half weeks). The logging is also poor.

So apart from all that, it is perfect...

I need to find a way to fight my way out of this mess but I am not sure how to go about it.

Ideally, I want to be able to:

  • migrate the import process to use staging tables and SSIS
  • modify the existing schema to include referential integrity
  • create indexes to improve performance
  • create the reports in SSRS instead of random Excel docs dotted around
  • lock down the access (or at least perform some sort of audit)

The main issue at the moment is performance. I have created a new filegroup on drive D: (where the log files are stored) and placed a few non-clustered indexes there. I am being slightly cautious as I don't want to increase the import times, as they are taking too long as it is!

I wanted to partition the larger tables, but partitioning is not included in Standard Edition; it is an Enterprise feature.

I realise that this is a pretty huge task and I am not looking for a magic fix but a little guidance on how to attack this would be a great help.

EDIT:

I should also point out that there is no test or dev environment for this, either...
