Wednesday, February 27, 2013

[how to] Can connect to MS SQL Server through tsql but not from bsqldb

Can connect to MS SQL Server through tsql but not from bsqldb

Posted: 27 Feb 2013 08:25 PM PST

After installing FreeTDS I was able to connect to the MS SQL server using tsql, but not yet through bsqldb. With bsqldb it tries to connect to master even though the database was specified as testDB.

serverName is set up in /etc/freetds.conf:

    [serverName]
    host = database.windows.net
    port = 1433
    tds version = 8.0

Using tsql:

    [gpadmin@mdw ~]$ /usr/bin/tsql -S serverName -U user -D testDB -P password

works fine

With bsqldb it tries to connect to master even though the database was specified as testDB:

    [gpadmin@mdw ~]$ /usr/bin/bsqldb -U user -P password -S serverName -D testDB -i tag.sql -o tag.csv

    Msg 4060, Level 11, State 1
    Server 'fj5j2jtt5k', Line 1
    Cannot open database "master" requested by the login. The login failed.
    bsqldb: error: severity 11 > 10, exiting

SSIS 2008 Package Configurations being ignored

Posted: 27 Feb 2013 08:13 PM PST

With the change to package configurations in 2008 compared to 2005, when I specify /ConfigFile something.dtsConfig on the command line, variables defined in the package keep their design-time values instead of using the settings from the config file.

I'm not quite sure I understand HOW to get the external config file to be used at all. I've read articles that say that only design-time configurations that are set will overwrite the load of the external file. Does this mean I can change the variables to blank strings and then they will get overwritten? I can't delete the variable completely! What about integers?

I've seen articles that mention turning OFF using package configurations in the package.

I can use the SSIS Package Editor or an XML editor to change the configuration file path in the package, and then it will use that file's settings "last" (regardless of the external /ConfigFile option), but I don't want to be changing the package. I want one package with Test.dtsConfig and Production.dtsConfig and be able to swap back and forth without changing the package.

What is the recommended way to do this now?

Is there a term for a table that has an "effective date" column?

Posted: 27 Feb 2013 06:20 PM PST

I work with a system which uses a lot of tables whose primary keys include

  1. a foreign key to another table and
  2. a column indicating the date that record became/becomes "active".

Example tables:

    employee                    (primary key is employee_id)
        employee_id    INT
        hire_date      DATE

    employee_salary             (primary key is employee_id, start_date)
        employee_id    INT      (foreign keyed to employee.employee_id)
        start_date     DATE
        salary         TINYINT

When viewing an employee in our application UI, we display all the information from employee plus whichever employee_salary record has the most recent start date on or before the date we are interested in.
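For illustration, a minimal, T-SQL-flavoured sketch of that lookup using the example tables above (the @as_of parameter is hypothetical):

    -- Current salary for each employee as of a given date:
    -- the employee_salary row with the latest start_date not after @as_of.
    DECLARE @as_of DATE = '2013-02-27';

    SELECT e.employee_id,
           e.hire_date,
           es.start_date,
           es.salary
    FROM employee AS e
    JOIN employee_salary AS es
      ON es.employee_id = e.employee_id
     AND es.start_date = (SELECT MAX(es2.start_date)
                          FROM employee_salary AS es2
                          WHERE es2.employee_id = e.employee_id
                            AND es2.start_date <= @as_of);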

In my working group, we would refer to employee as a "base" table and employee_salary as a "repeating" table, since we consider the salary to merely be an aspect of the "base" entity, the employee. We also say that the employee_salary table uses an "effective date".

But these don't seem to be terms in widespread use. Are there standard terms for

  1. tables that represent an entity, and
  2. tables that represent a changing aspect of one of those entities?

How to remove a many-to-one relation

Posted: 27 Feb 2013 04:10 PM PST

I'm currently designing the OLAP cube for my bachelor-year major project. It is laid out roughly as described below.

Here call_date holds the date of the call, and call_time holds the 86,400 seconds of each day. call_date has a one-to-many relationship with call_time (as many calls occur in a day). My professor told me that the best OLAP design is one that can answer any dimensional question, like "What is the duration of calls on day 1?" My question is: is it necessary to remove the many-to-one relation in an OLAP cube? If so, what is the next step I have to perform in order to do so?

I'm not the best at describing the problem, so if anything is unclear please comment below.

MySQL MyISAM: view dropped, now shows as table "in use", can't drop

Posted: 27 Feb 2013 07:43 PM PST

I had a view; then a column was dropped from the underlying table, which made the view invalid.

I tried dropping the view so I could recreate it, but MySQL said it was a table (not a view) and marked it "in use".

Now I can't drop or repair that table, because MySQL says it is not a table.

I've googled for hours trying to find a way to fix this :(

I do have root access on the MySQL server, which runs Ubuntu. I stopped MySQL and tried myisamchk, but no luck.

I can see a .frm file for the view in the database's data folder.

Do I need to delete that file to make the corrupt item go away, or do I somehow delete the table from the schema?

I have a backup from last night, but would rather fix this one issue than restore from backup and lose today's changes. Thanks.
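For reference, a hedged sketch of the statements usually worth trying first in this situation (the object name is a placeholder; whether the orphaned .frm file also has to be removed from the data directory depends on what MySQL still thinks the object is):

    -- Try dropping the object both ways; one of the two usually matches
    -- whatever MySQL currently thinks the .frm represents.
    DROP VIEW IF EXISTS my_broken_view;
    DROP TABLE IF EXISTS my_broken_view;

    -- Confirm it is gone before recreating the view.
    SHOW FULL TABLES LIKE 'my_broken_view';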

MYSQL: Is it okay to have different lower_case_table_names value on master and slave?

Posted: 27 Feb 2013 03:45 PM PST

I have an OS X MySQL master with lower_case_table_names set to 2. I have just set up a Linux MySQL slave with lower_case_table_names=1. I haven't started replication on the slave, but everything looks good at this point.

Will it cause any issues with replication?

Thank you
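For reference, a quick way to compare the relevant settings on both servers (run on each):

    -- How each server treats table-name case, and whether its file system is case sensitive.
    SHOW VARIABLES LIKE 'lower_case_table_names';
    SHOW VARIABLES LIKE 'lower_case_file_system';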

Domain group hierarchical group membership and SQL Logins mapped to AD groups

Posted: 27 Feb 2013 03:42 PM PST

SQL Version:

    Microsoft SQL Server 2005 - 9.00.4035.00 (X64)   Nov 24 2008 16:17:31
    Copyright (c) 1988-2005 Microsoft Corporation
    Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)

This is the AD group structure:

  • Group A
    • User 4
    • Group B
      • User 1
      • User 2
      • User 3

I granted Group A login access to my SQL instance (connect and dataReader/writer to Northwind db).

I'd expect to be able to log in to SQL Server as Users 1-4, but in fact only User 4 is allowed to log in. If I grant Group B access then all users can log in as expected.

So I'd venture to answer my own question and say "no, SQL Server only respects the first-level members of an AD security group", but I wanted confirmation from someone else.

Logs show state 11 login failure which indicates "Valid login but server access failure."
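For anyone checking the same thing, a quick way to see which Windows accounts SQL Server actually resolves through a group login (the domain and group names below are placeholders):

    -- List the accounts SQL Server can resolve as members of each granted group.
    EXEC xp_logininfo 'MYDOMAIN\GroupA', 'members';
    EXEC xp_logininfo 'MYDOMAIN\GroupB', 'members';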

MySQL InnoDB table index increases greatly

Posted: 27 Feb 2013 08:36 PM PST

We have an active and important InnoDB table with about 8M rows. We just noticed that for several months, the primary key value (a BIGINT) has been jumping by hundreds at a time.

I turned on query logging while watching for a jump and found no deletions from the table during the time. (We don't often delete historical data, anyway.)

MySQL version: mysql Ver 14.14 Distrib 5.1.66, for redhat-linux-gnu (i386) using readline 5.1

OS: Centos 6.3

Any ideas how or why this could happen?
Any other information you might need to assess the situation?

Thanks!
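For anyone investigating the same symptom, a hedged starting point: InnoDB can legitimately skip auto-increment values (for example after rolled-back or failed inserts, INSERT ... ON DUPLICATE KEY UPDATE, bulk inserts, or a server restart, since the counter is rebuilt from the current maximum), so gaps do not necessarily mean rows were deleted. These queries show the relevant state; the table name is a placeholder:

    -- Current auto-increment counter for the table.
    SHOW TABLE STATUS LIKE 'my_table';

    -- How InnoDB allocates auto-increment values; some modes reserve
    -- ranges for multi-row inserts and can leave gaps.
    SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';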

Dumping and restoring on db server - postgres

Posted: 27 Feb 2013 06:24 PM PST

Is there a way to execute a backup and restore of a PostgreSQL database from the app server without calling pg_dump and pg_restore on the app server itself? The way I'm doing it now, I'm running the operations from the app server against the DB server and generating unnecessary network traffic.

I would like to execute dumping and restoring directly on DB server, but to initiate it from my application.

My current situation is that I generate extra network traffic, wait longer, and also need to install the PostgreSQL client on my app server.

Any hints?

Best practices for history/temporal tables?

Posted: 27 Feb 2013 07:41 PM PST

Suppose I have an object with certain fields whose history I want to track and certain fields whose history I do not want to track. From a normalization perspective, is the following schema all right:

    CREATE TABLE MyObject (
        MyObjectId INT IDENTITY NOT NULL PRIMARY KEY,
        MyObjectField1 VARCHAR(100) NOT NULL,
        MyObjectField2 VARCHAR(100) NOT NULL,
        MyObjectField3 VARCHAR(100) NOT NULL,
        MyObjectTrackedField1 VARCHAR(100) NOT NULL,
        MyObjectTrackedField2 VARCHAR(100) NOT NULL,
        MyObjectTrackedField3 VARCHAR(100) NOT NULL
    )

    CREATE TABLE MyObjectHistory (
        MyObjectHistoryId INT IDENTITY NOT NULL PRIMARY KEY,
        MyObjectId INT NOT NULL FOREIGN KEY REFERENCES MyObject(MyObjectId),
        MyObjectTrackedField1 VARCHAR(100) NOT NULL,
        MyObjectTrackedField2 VARCHAR(100) NOT NULL,
        MyObjectTrackedField3 VARCHAR(100) NOT NULL
    )

where MyObjectHistory contains the tracked fields for every revision except the latest. Or should all tracked fields be in one table, with all revisions including the latest in that table, as in:

    CREATE TABLE MyObject (
        MyObjectId INT IDENTITY NOT NULL PRIMARY KEY,
        MyObjectField1 VARCHAR(100) NOT NULL,
        MyObjectField2 VARCHAR(100) NOT NULL,
        MyObjectField3 VARCHAR(100) NOT NULL
    )

    CREATE TABLE MyObjectHistory (
        MyObjectHistoryId INT IDENTITY NOT NULL PRIMARY KEY,
        MyObjectId INT NOT NULL FOREIGN KEY REFERENCES MyObject(MyObjectId),
        MyObjectTrackedField1 VARCHAR(100) NOT NULL,
        MyObjectTrackedField2 VARCHAR(100) NOT NULL,
        MyObjectTrackedField3 VARCHAR(100) NOT NULL
    )
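For what it's worth, a minimal sketch of how the "latest revision" read looks under the second design (names taken from the schema above; ROW_NUMBER is just one way to pick the newest history row per object):

    -- Current state of each object = its most recent history row.
    SELECT o.MyObjectId,
           o.MyObjectField1,
           h.MyObjectTrackedField1,
           h.MyObjectTrackedField2,
           h.MyObjectTrackedField3
    FROM MyObject AS o
    JOIN (
        SELECT MyObjectId,
               MyObjectTrackedField1,
               MyObjectTrackedField2,
               MyObjectTrackedField3,
               ROW_NUMBER() OVER (PARTITION BY MyObjectId
                                  ORDER BY MyObjectHistoryId DESC) AS rn
        FROM MyObjectHistory
    ) AS h
      ON h.MyObjectId = o.MyObjectId
     AND h.rn = 1;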

Migrating database from OSX to Linux

Posted: 27 Feb 2013 06:12 PM PST

I followed the setup in the link below to set up MySQL. Basically, I have a master on an OS X server and want to set up a Linux slave.

Create a mysql slave database from master database

After I copied the data from OS X to Linux, all the tables are found correctly except the InnoDB ones. The database name was databasePE.

When I start the database, I get an error that databasepe/innodb_table_name.ibd is not found. I created a symlink from databasepe to databasePE.

If I use databasepe, then InnoDB tables work, but MyISAM tables don't:

ERROR 1146 (42S02): Table 'databasePE.myisam_table' doesn't exist  

If I use databasePE, then InnoDB tables don't work, but MyISAM tables do:

ERROR 1146 (42S02): Table 'crashtracerPE.innodb_table' doesn't exist  

OS X uses a case-insensitive file system, so this was never a problem on OS X. But since Linux uses case-sensitive file names, MySQL could not find the tables.

I tried setting lower_case_file_system=ON, but mysqld threw an error:

130227 13:52:59 [ERROR] /usr/sbin/mysqld: unknown variable 'lower_case_file_system=ON'  

I tried setting lower_case_table_names=2, but mysqld threw the error below and reverted it back to zero:

130227 14:05:11 [Warning] lower_case_table_names was set to 2, even though your the file system '/mnt/data2/maindb/db/' is case sensitive.  Now setting lower_case_table_names to 0 to avoid future problems.  

Finally, after renaming databasePE to databasepe and setting lower_case_table_names=1, MyISAM and InnoDB tables are working correctly.

I am still not sure whether to start replication, because the master on OS X uses lower_case_table_names=2, whereas the Linux slave uses lower_case_table_names=1.

Display one row at a time with no repetition

Posted: 27 Feb 2013 01:43 PM PST

I have a database full of data. I would like to display one row from the database at a time: the information in the database is confidential, so only one result can be seen at a time, and a row that has been displayed should not be displayed again. It's more of a ticketing system. The row data is returned to a PHP file which displays it.
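One common pattern for handing out rows exactly once is to claim the next row inside a transaction, so two requests can never see the same one; a minimal sketch, assuming a hypothetical InnoDB table named tickets with a served flag (the ? is a placeholder bound from PHP):

    START TRANSACTION;

    -- Grab the next unserved row and lock it so no other session can claim it.
    SELECT id, payload
    FROM tickets
    WHERE served = 0
    ORDER BY id
    LIMIT 1
    FOR UPDATE;

    -- Mark the fetched row as served so it is never displayed again.
    UPDATE tickets SET served = 1 WHERE id = ?;

    COMMIT;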

Is there a better option than Union All for multiple selects from the same row?

Posted: 27 Feb 2013 01:57 PM PST

Example Schema:

    CREATE TABLE [dbo].[Base](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [Option1ID] [int] NULL,
        [Option2ID] [int] NULL,
        [Option3ID] [int] NULL,
        [Option1Name] [varchar] NULL,
        [Option2Name] [varchar] NULL,
        [Option3Name] [varchar] NULL,
        [Option1LName] [varchar] NULL,
        [Option2LName] [varchar] NULL,
        [Option3LName] [varchar] NULL
    )

Is there a way to get results that show up like:

ID | OptionID | OptionName | OptionLName  

I have tried to achieve this using UNION ALL, but that means going over the same row three times in my example; in my real problem I have to do it 10 times. I cannot normalize the table due to legacy code. Is there a way to only go over the Base row once?
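One technique worth considering (not from the original post) is CROSS APPLY with a VALUES constructor, available in SQL Server 2008 and later, which reads each base row once and fans it out into one output row per option set; a sketch against the schema above:

    SELECT b.ID, x.OptionID, x.OptionName, x.OptionLName
    FROM [dbo].[Base] AS b
    CROSS APPLY (VALUES
        (b.Option1ID, b.Option1Name, b.Option1LName),
        (b.Option2ID, b.Option2Name, b.Option2LName),
        (b.Option3ID, b.Option3Name, b.Option3LName)
    ) AS x (OptionID, OptionName, OptionLName);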

Is there a tuning parameter for MySQL that allows you to set an on-disk gap between non-sequential primary keys?

Posted: 27 Feb 2013 01:01 PM PST

I understand that one of the primary issues with using a non-sequential primary key is that every disk/memory write that does not come after the last known key causes a re-write of everything from the point of insertion on.

I'm looking at using a modified UUID that will be semi-sequential; that is, a scope-related number will be prepended to the UUID in order to provide some gap optimization for data in a common scope.

What would be ideal is to force InnoDB to write rows a certain distance apart based on A) the sequence of known keys, and B) table growth estimates.

A simplified example is this:

Disk w/o this tuning:

    |< start of disk
    | row 2592
    | row 33093
    | row 34928
    | row 50983
    | row 390853
    | row 391985
    |
    | free space ...
    |
    |< end of disk

Disk with this tuning:

    |< start of disk
    | row 2592
    |
    |
    |
    | row 33093
    |
    | row 34928
    |
    |
    | row 50983
    |
    |
    |
    | row 390853
    |
    | row 391985
    |< configurable end of allocated disk
    |
    | free space ...
    |
    |< end of disk

In the second case, most writes would not require a re-write. Even if you ran out of space in between, you could then do a full re-allocation and cut the number of row re-writes down by a considerable factor.

Does something like this exist?

Create index if it does not exist

Posted: 27 Feb 2013 02:22 PM PST

I am working on a function that allows me to add an index if it does not exist. I am running into the problem that I cannot get a list of indexes to compare to. Any thoughts?

This is a similar issue to the column creation one that is solved with this code: http://stackoverflow.com/a/12603892/368511
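Assuming MySQL (as the linked column-creation approach suggests), a minimal sketch of the existence check using information_schema; the schema, table, index and column names are placeholders:

    -- Does the index already exist?
    SELECT COUNT(*)
    FROM information_schema.statistics
    WHERE table_schema = 'my_schema'
      AND table_name   = 'my_table'
      AND index_name   = 'my_index';

    -- Inside a stored procedure, branch on that count and only run the DDL
    -- when it is zero, e.g.:
    -- ALTER TABLE my_table ADD INDEX my_index (my_column);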

How to get non-repeated results

Posted: 27 Feb 2013 01:21 PM PST

I have two tables, default_tb_persona and default_tb_envio; this is the DDL for those tables:

    CREATE TABLE `default_tb_envio` (
      `codigo` varchar(7) NOT NULL,
      `fecha` date NOT NULL,
      `id_modalidad` int(10) NOT NULL,
      `costo` decimal(28,6) NOT NULL,
      `id_agencia` int(10) DEFAULT NULL,
      `no_envio` int(10) NOT NULL,
      `id_forma_pago` int(10) DEFAULT NULL,
      `bl` text,
      `id_destino` int(10) DEFAULT NULL,
      `id_persona_envia` varchar(255) DEFAULT NULL,
      `id_persona_recibe` varchar(255) DEFAULT NULL,
      `id_organismo` int(10) DEFAULT '1',
      PRIMARY KEY (`codigo`),
      UNIQUE KEY `envio_pkey` (`codigo`),
      KEY `fki_` (`id_persona_envia`),
      KEY `fki_1` (`id_persona_recibe`),
      KEY `envio_modalidad_fkey` (`id_modalidad`),
      KEY `id_foma_pago_fkey` (`id_forma_pago`),
      KEY `id_organismo_fk` (`id_organismo`),
      KEY `ndestino_fkey` (`id_destino`),
      KEY `tb_envio_id_agencia_fkey` (`id_agencia`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    CREATE TABLE `default_tb_persona` (
      `nombre_apellido` varchar(255) NOT NULL,
      `ci` varchar(15) NOT NULL,
      `telefono` varchar(255) DEFAULT NULL,
      `direccion` text,
      `correo` varchar(255) DEFAULT NULL,
      PRIMARY KEY (`ci`),
      UNIQUE KEY `tb_persona_pkey` (`ci`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

I'm trying to get non-duplicated results from default_tb_persona using a JOIN statement; this is what I've written for that:

    SELECT DISTINCT *
    FROM `default_tb_persona`
    LEFT JOIN `default_tb_envio`
      ON `default_tb_persona`.`ci` = `default_tb_envio`.`id_persona_envia`

But I'm getting repeated results. Is that query right? If I have two records pointing to the same record in tb_persona, should the query return those two records? I have test data if it would help; just tell me where to send it. Also, is it possible to get only the fields from default_tb_persona and not from both tables?
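The repeats most likely come from the LEFT JOIN producing one output row per matching envio for each persona; if the goal is each persona once, with only persona columns, something along these lines may be closer to what's wanted (a sketch, limited here to personas that appear as sender):

    -- One row per persona, only persona columns.
    SELECT p.*
    FROM `default_tb_persona` AS p
    WHERE EXISTS (
        SELECT 1
        FROM `default_tb_envio` AS e
        WHERE e.`id_persona_envia` = p.`ci`
    );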

Error on data node startup in MySQL-cluster

Posted: 27 Feb 2013 02:58 PM PST

Running a 24 core server (with HT enabled) I tried increasing MaxNoOfExecutionThreads to 24 to perform some benchmarks and got the following error:

2013-02-27 11:21:44 [ndbd] INFO -- Trying to start 12 LQH workers with only 4 log parts, try initial node restart to be able to use more LQH workers.

Here are options from config.ini I find relevant:

NoOfFragmentLogFiles=64

FragmentLogFileSize=128M

It seems that the number of redo log files (not fragments) is always four, and they are created in the D9-D11 directories on the filesystem. I haven't been able to find a parameter to change this.

Version running is: 5.5.29-ndb-7.2.10

Any help or experience is welcome.

EDIT - solution: the following option defines the number of redo log parts: NoOfFragmentLogParts

After setting it to 12, you get D8-D19 for the redo logs, and MaxNoOfExecutionThreads=24 works fine.

How might a corrupt partition in TempDB result in DBCC CHECKDB reporting no issue?

Posted: 27 Feb 2013 01:12 PM PST

One of our SQL Servers reported the following error recently:

    DATE/TIME:    2/25/2013 9:15:14 PM

    DESCRIPTION:  No catalog entry found for partition ID 9079262474267394048
                  in database 2. The metadata is inconsistent. Run DBCC CHECKDB
                  to check for a metadata corruption.

Less than 15 minutes later I connected to the server and ran:

    SELECT name
    FROM sys.databases
    WHERE database_id = 2;

Which returned 'tempdb'. I then ran:

DBCC CHECKDB ('tempdb') WITH NO_INFOMSGS, TABLERESULTS;  

Which returned no results, indicating no issues with the database affected.

How could corruption in the database result in the error message above, yet DBCC CHECKDB not report the problem? I presume that if a page checksum calculation fails, resulting in the page being marked as suspect, any object referencing that page would not be able to be dropped, but I must be wrong.

Once a page is marked 'suspect', how can it be marked not-suspect, or fixed, or reused, or whatever such that DBCC CHECKDB does not report any problem with the page in question?


Edit: 2013-02-27 13:24

Just for fun, I tried to recreate the corruption in TempDB assuming a #temp table was the culprit.

However, since I cannot set the SINGLE_USER option in TempDB, I cannot use DBCC WRITEPAGE to corrupt a page, and therefore I cannot force corruption in TempDB.

Instead of using DBCC WRITEPAGE one could set the database offline and use a hex editor to modify random bytes in the db file. Of course, that does not work either on TempDB since the database engine cannot run with TempDB offline.

If you stop the instance, TempDB is automatically recreated at next startup; hence that won't do the trick either.

If anyone can think of a way to recreate this corruption, I'd be willing to do further research.

In order to test the hypothesis that a corrupted page cannot be fixed by DROP TABLE, I created a test database and used the following script to corrupt a page, then attempted to drop the affected table. The result here was that the table could not be deleted; I had to RESTORE DATABASE Test PAGE = ''... in order to recover the affected page. I assume that if I had made a change to some other part of the page in question, perhaps the page could have been corrected with DROP TABLE or perhaps TRUNCATE TABLE.

    /* ********************************************* */
    /* ********************************************* */
    /* DO NOT USE THIS CODE ON A PRODUCTION SYSTEM!!  */
    /* ********************************************* */
    /* ********************************************* */
    USE Master;
    GO
    ALTER DATABASE Test SET RECOVERY FULL;
    BACKUP DATABASE Test
        TO DISK = 'Test_db.bak'
        WITH FORMAT
            , INIT
            , NAME = 'Test Database backup'
            , SKIP
            , NOREWIND
            , NOUNLOAD
            , COMPRESSION
            , STATS = 1;
    BACKUP LOG Test
        TO DISK = 'Test_log.bak'
        WITH FORMAT
            , INIT
            , NAME = 'Test Log backup'
            , SKIP
            , NOREWIND
            , NOUNLOAD
            , COMPRESSION
            , STATS = 1;
    GO
    ALTER DATABASE Test SET SINGLE_USER;
    GO
    USE Test;
    GO
    IF EXISTS (SELECT name FROM sys.key_constraints WHERE name = 'PK_temp')
        ALTER TABLE temp DROP CONSTRAINT PK_temp;
    IF EXISTS (SELECT name FROM sys.default_constraints
        WHERE name = 'DF_temp_testdata')
        ALTER TABLE temp DROP CONSTRAINT DF_temp_testdata;
    IF EXISTS (SELECT name FROM sys.tables WHERE name = 'temp')
        DROP TABLE temp;
    GO
    CREATE TABLE temp
    (
        tempID INT NOT NULL CONSTRAINT PK_temp PRIMARY KEY CLUSTERED IDENTITY(1,1)
        , testdata uniqueidentifier CONSTRAINT DF_temp_testdata DEFAULT (NEWID())
    );
    GO

    /* insert 10 rows into temp */
    INSERT INTO temp DEFAULT VALUES;
    GO 10

    /* get some necessary parameters */
    DECLARE @partitionID bigint;
    DECLARE @dbid smallint;
    DECLARE @tblid int;
    DECLARE @indexid int;
    DECLARE @pageid bigint;
    DECLARE @offset INT;
    DECLARE @fileid INT;

    SELECT @dbid = db_id('Test')
        , @tblid = t.object_id
        , @partitionID = p.partition_id
        , @indexid = i.index_id
    FROM sys.tables t
        INNER JOIN sys.partitions p ON t.object_id = p.object_id
        INNER JOIN sys.indexes i ON t.object_id = i.object_id
    WHERE t.name = 'temp';

    SELECT TOP(1) @fileid = file_id
    FROM sys.database_files;

    SELECT TOP(1) @pageid = allocated_page_page_id
    FROM sys.dm_db_database_page_allocations(@dbid, @tblid, null, @partitionID, 'LIMITED')
    WHERE allocation_unit_type = 1;

    /* get a random offset into the 8KB page */
    SET @offset = FLOOR(rand() * 8192);
    SELECT @offset;

    /* 0x74 below is the letter 't' */
    DBCC WRITEPAGE (@dbid, @fileid, @pageid, @offset, 1, 0x74, 1);

    SELECT * FROM temp;

    Msg 824, Level 24, State 2, Line 36
    SQL Server detected a logical consistency-based I/O error: incorrect checksum
    (expected: 0x298b2ce9; actual: 0x2ecb2ce9). It occurred during a read of page
    (1:1054) in database ID 7 at offset 0x0000000083c000 in file 'C:\SQLServer
    \MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf'. Additional messages in the SQL
    Server error log or system event log may provide more detail. This is a
    severe error condition that threatens database integrity and must be
    corrected immediately. Complete a full database consistency check
    (DBCC CHECKDB). This error can be caused by many factors; for more
    information, see SQL Server Books Online.

At this point you get disconnected from the database engine, so reconnect to continue.

    USE Test;
    DBCC CHECKDB WITH NO_INFOMSGS, TABLERESULTS;

Corruption is reported here.

    DROP TABLE temp;

    Msg 824, Level 24, State 2, Line 36
    SQL Server detected a logical consistency-based I/O error: incorrect checksum
    (expected: 0x298b2ce9; actual: 0x2ecb2ce9). It occurred during a read of page
    (1:1054) in database ID 7 at offset 0x0000000083c000 in file 'C:\SQLServer
    \MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf'. Additional messages in the SQL
    Server error log or system event log may provide more detail. This is a
    severe error condition that threatens database integrity and must be
    corrected immediately. Complete a full database consistency check
    (DBCC CHECKDB). This error can be caused by many factors; for more
    information, see SQL Server Books Online.

Corruption is reported here, DROP TABLE fails.

    /* assuming ENTERPRISE or DEVELOPER edition of SQL Server,
       I can use PAGE='' to restore a single page from backup */
    USE Master;
    RESTORE DATABASE Test PAGE = '1:1054' FROM DISK = 'Test_db.bak';
    BACKUP LOG Test TO DISK = 'Test_log_1.bak';

    RESTORE LOG Test FROM DISK = 'Test_log.bak';
    RESTORE LOG Test FROM DISK = 'Test_log_1.bak';

Edit #2, to add the @@VERSION info requested.

SELECT @@VERSION;  

Returns:

    Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)
        Oct 19 2012 13:38:57
        Copyright (c) Microsoft Corporation
        Enterprise Evaluation Edition (64-bit) on Windows NT 6.2 <X64>
        (Build 9200: )

I know this is the Evaluation Edition, we have keys for the Enterprise Edition, and will be doing an Edition Upgrade soon.

Is a join optimized to a where clause at runtime?

Posted: 27 Feb 2013 06:02 PM PST

When I write a query like this...

    select *
    from table1 t1
    join table2 t2
      on t1.id = t2.id

Does the SQL optimizer (not sure if that is the correct term) translate that to...

    select *
    from table1 t1, table2 t2
    where t1.id = t2.id

Essentially, is the JOIN syntax in SQL Server just an easier way to write SQL, or is it actually used at run time?

Edit: I almost always, and will almost always, use the Join syntax. I am just curious what happens.

managing a growing database in the long run (10 years from now)

Posted: 27 Feb 2013 01:01 PM PST

I'm close to finishing up a web app (a PHP + MySQL website portal), which will start slowly, with just a few uploaded pictures and a few rows inserted into the main database table, but soon, and over a few years, I expect it to grow steadily and even logarithmically at some point.

By my Excel calculations, at some point the database table data + indexes (InnoDB) will outgrow the hosted pictures in size by a factor of 10, but in the beginning it will be the other way around.

So basically my portal hosts pictures (medium-size JPEGs, let's say) and uses MySQL InnoDB tables.

My question to all you admins and professionals is about my MySQL database.

I'm thinking in advance so I don't get caught unprepared:

When my database grows to 1.2 TB or 2 TB or more, how can I deal with that? Can I, say, split it across two or more dedicated machines so that searches and reads from this big database stay fast? Or should I use one server and load it up with a lot of cheap 7200 rpm SATA drives? What can the solution to a growing database be?

My first configuration for this web app is a VPS with SSD hosting: a 4 GHz processor, 3 GB of RAM and 2x60 GB SSDs in RAID 1.

Thanks for your answers.

Pull Access schema into a SQL Server table

Posted: 27 Feb 2013 08:25 PM PST

We had an assembly that used to pull a list of tables and their columns from an Access database and place the listing into a SQL Server table for later use. For various reasons the assembly is no longer working and reworking it is not an option at this time.

Is there a way to pull a listing of table names and columns from a linked server Access database? If not, is it possible to pull this information from an SSIS package? The output I need is really simple:

    Customer    First Name
    Customer    Last Name
    Customer    Email
    Product     Product ID
    Product     Product Name
    ...
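For what it's worth, a hedged sketch of one way to get this through a linked server; the linked server name ACCESS_LS is a placeholder, and sp_tables_ex / sp_columns_ex simply return whatever metadata the linked server's provider exposes:

    -- List tables exposed by the linked Access database.
    EXEC sp_tables_ex @table_server = N'ACCESS_LS';

    -- List columns for one table; repeat per table, or capture the results
    -- with INSERT ... EXEC into a staging table for the two-column output above.
    EXEC sp_columns_ex @table_server = N'ACCESS_LS', @table_name = N'Customer';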

SQL Server 2012 slower than 2008

Posted: 27 Feb 2013 02:19 PM PST

I migrated a large website and database from an older server (Windows 2008 / SQL Server 2008 / 16 GB RAM / 2 x 2.5 GHz Quad Core / SAS disks) to a newer, much better server (Windows 2008 R2 / SQL Server 2012 SP1 / 64 GB RAM / 2 x 2.1 GHz 16 Core processors / SSD disks).

I detached the database files on the old server, copied and attached them on the new server. Everything went very well.

After that, I changed the compatibility level to 110, updated statistics, and rebuilt indexes.

To my huge disappointment, I noticed that most SQL queries are much slower (2-4 times slower) on the new SQL Server 2012 instance than on the old SQL Server 2008 one.

For example, on a table with around 700k records, an indexed query took around 100 ms on the old server. On the new server, the same query takes around 350 ms.

The same happens for all queries.

I would appreciate some help here. Let me know what to check or verify, because I find it very hard to believe that performance is worse on a better server with a newer SQL Server.

More details:

Memory is set to max.

I have this table and index:

    CREATE TABLE [dbo].[Answer_Details_23](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [UserID] [int] NOT NULL,
        [SurveyID] [int] NOT NULL,
        [CustomerID] [int] NOT NULL default 0,
        [SummaryID] [int] NOT NULL,
        [QuestionID] [int] NOT NULL,
        [RowID] [int] NOT NULL default 0,
        [OptionID] [int] NOT NULL default 0,
        [EnteredText] [ntext] NULL,
     CONSTRAINT [Answer_Details_23_PK] PRIMARY KEY NONCLUSTERED
     (
        [ID] ASC
     ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

    CREATE NONCLUSTERED INDEX [IDX_Answer_Details_23_SummaryID_QuestionID] ON [dbo].[Answer_Details_23]
    (
        [SummaryID] ASC,
        [QuestionID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

I executed this query:

    set statistics time on;
    select summaryid, count(summaryid)
    from Answer_Details_23
    group by summaryid
    order by count(summaryid) desc;
    set statistics time off;

OLD SERVER - SQL Server Execution Times: CPU time = 419 ms, elapsed time = 695 ms.

NEW SERVER - SQL Server Execution Times: CPU time = 1340 ms, elapsed time = 1636 ms.

EXECUTION PLANS uploaded here: http://we.tl/ARbPuvf9t8

How to pass a table type with an array field to a function in postgresql

Posted: 27 Feb 2013 01:49 PM PST

I have a table called book:

    CREATE TABLE book
    (
      id smallint NOT NULL DEFAULT 0,
      bname text,
      btype text,
      bprices numeric(11,2)[],
      CONSTRAINT key PRIMARY KEY (id)
    )

and a function save_book:

    CREATE OR REPLACE FUNCTION save_book(thebook book)
      RETURNS text AS
    $BODY$
    DECLARE
        myoutput text := 'Nothing has occured';
    BEGIN
        update book set
            bname = thebook.bname,
            btype = thebook.btype,
            bprices = thebook.bprices
        WHERE id = thebook.id;

        IF FOUND THEN
            myoutput := 'Record with PK[' || thebook.id || '] successfully updated';
            RETURN myoutput;
        END IF;

        BEGIN
            INSERT INTO book values(thebook.id, thebook.bname, thebook.btype,
                thebook.bprices);
            myoutput := 'Record successfully added';
        END;

        RETURN myoutput;
    END;
    $BODY$
      LANGUAGE plpgsql VOLATILE
      COST 100;

Now when I call the function

SELECT save_book('(179,the art of war,fiction,{190,220})'::book);  

I get the error:

    ERROR: malformed array literal: "{190"
    SQL state: 22P02
    Character: 18

I don't understand, because I don't see any error in the format of the array. Any help?
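For comparison, the problem is most likely the composite (row) literal rather than the array itself: inside a row literal, a field that contains commas or braces has to be double-quoted, otherwise the parser splits {190,220} at the comma. A sketch of a call that should parse, using the same book type:

    -- Quote the text fields and the array field inside the row literal,
    -- so the comma inside {190,220} is not treated as a field separator.
    SELECT save_book('(179,"the art of war",fiction,"{190,220}")'::book);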

Select the top 5 records and the specific column from each table in the same database

Posted: 27 Feb 2013 01:16 PM PST

I have 50+ tables in a database.

I don't want to use the following SQL on each table in a database.

select top 10 * from {table_name}  

So, is it possible to display the first 10 records, and only show the 2nd and 3rd columns, for each table inside the same database using SQL?

(If the table does not have those two columns, it should display the 1st and 2nd columns. If the table has one column, it should display the 1st column. If the table has more than two columns, it should display the 2nd and 3rd columns. If the table has no columns, display nothing.)
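Assuming SQL Server (given the TOP syntax above), a hedged sketch of the dynamic SQL approach for the common case of tables that have at least three columns; the one- and two-column fallbacks, and gaps in column_id left by dropped columns, are deliberately left out to keep it short:

    DECLARE @sql nvarchar(max) = N'';

    -- Build one SELECT TOP (10) per table, picking its 2nd and 3rd columns.
    SELECT @sql += N'SELECT TOP (10) '
                 + QUOTENAME(c2.name) + N', ' + QUOTENAME(c3.name)
                 + N' FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                 + N';' + CHAR(13)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.columns AS c2 ON c2.object_id = t.object_id AND c2.column_id = 2
    JOIN sys.columns AS c3 ON c3.object_id = t.object_id AND c3.column_id = 3;

    EXEC sys.sp_executesql @sql;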

How to select the first 10 records for each table in the same database using sql

Posted: 27 Feb 2013 12:37 PM PST

I have 50+ tables in a database.

I don't want to use the following SQL on each table in a database.

select * from {table_name}  

So, is it possible to display the first 10 records for each table inside the same database using SQL?
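Assuming SQL Server, one commonly used shortcut (undocumented and unsupported, so treat it as a convenience rather than a guarantee) is sp_MSforeachtable:

    -- Runs the statement once per user table, substituting ? with the table name.
    EXEC sp_MSforeachtable 'SELECT TOP 10 * FROM ?;';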

Specifying the failover partner when using Availability Groups

Posted: 27 Feb 2013 01:02 PM PST

We have a situation where we are using replicated databases in an environment that's making use of Availability Groups. As we wanted no manual intervention when failing over, we decided to set up subscriptions to write replicated data to both of our replica servers. As these databases will only ever be read from, this architecture should be ok.

To further configure this and after reading this article by Brent Ozar, we set the connection strings in our apps that used said databases with a failover partner such that if we lost our primary replica then our app would still work.

After a recent test, failover and fail back, we have now started receiving some intermittent errors from our clients along the lines of:

    System.InvalidOperationException:
    Server xxx, database xxx is not configured for database mirroring.

At present I am confused as to why this error is appearing. There have been some suggestions of adding a connection timeout to the connection string; however, that doesn't make sense to me, because if the primary timed out then reading from the secondary should work, and in our case that is valid as it's an exact copy.

If anyone has any ideas I would be grateful.

EDIT: Further to this, I have now removed the failover partner from the connection string and left it pointing at the DB on the replica; however, we get errors like:

System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - Only one usage of each socket address (protocol/network address/port) is normally permitted.)

It's like the client has cached the secondary server somewhere.


Edit: in response to Brent's answer

Hi Brent, thanks for that; however, I wasn't clear in my initial description. The replicated databases are not part of an availability group; instead I have a subscription on each node because, as I understand it, replicated DBs are not well supported in AGs.

I was therefore trying to come up with a way to avoid manual intervention for these DBs should a failure of the replica occur and the other databases which are part of an AG failover.

I was thinking that while Node A was up all connections would go to that server, but if Node A failed then connections would go to Node B when using Failover Partner. However this does not seem to be the case and actually according to this article:

http://msdn.microsoft.com/en-gb/library/system.data.sqlclient.sqlconnection.connectionstring.aspx

If you specify a failover partner but the failover partner server is not configured for database mirroring and the primary server (specified with the Server keyword) is not available, then the connection will fail.

This failover partner configuration seems not to be supported for the way I am attempting to use it, which leaves me with a situation where manual intervention is required for any failover :( unless I am missing something?

innodb changing row format doesn't decrease table size?

Posted: 27 Feb 2013 06:28 PM PST

We're currently using MySQL with InnoDB and we have some large tables that use the COMPACT row format. When I change the row format to COMPRESSED, we still see the same size for the table. Does anyone know the reason for this?
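For reference, a hedged sketch of the prerequisites that usually have to be in place before COMPRESSED actually takes effect on this generation of MySQL: the table needs its own .ibd file and the Barracuda file format, and it has to be rebuilt; otherwise InnoDB silently falls back to the old format. The table name is a placeholder:

    -- COMPRESSED requires the Barracuda file format and file-per-table tablespaces.
    SET GLOBAL innodb_file_format = 'Barracuda';
    SET GLOBAL innodb_file_per_table = 1;

    -- Rebuild the table so the new row format is applied, then check for
    -- warnings about falling back to the default format.
    ALTER TABLE my_big_table ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
    SHOW WARNINGS;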

Can I use a foreign key index as a shortcut to getting a row count in an INNODB table?

Posted: 27 Feb 2013 01:34 PM PST

I have a table that has a large number of rows in it.

The primary key (an auto-incrementing integer) is, by default, indexed.

While waiting for a row count to be returned, I did an EXPLAIN in another window and the results were as follows:

    mysql> SELECT COUNT(1) FROM `gauge_data`;
    +----------+
    | COUNT(1) |
    +----------+
    | 25453476 |
    +----------+
    1 row in set (2 min 36.20 sec)

    mysql> EXPLAIN SELECT COUNT(1) FROM `gauge_data`;
    +----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
    | id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows     | Extra       |
    +----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
    |  1 | SIMPLE      | gauge_data | index | NULL          | gauge_data_FI_1 | 5       | NULL | 24596487 | Using index |
    +----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
    1 row in set (0.13 sec)

Since the primary key is guaranteed to be unique, can I just take the number of rows from the EXPLAIN and use that as the row count of the table?

BTW, I believe the difference in numbers is due to the fact that more data is continually being added to this table.
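On that point, the rows figure shown by EXPLAIN is only an optimizer estimate for InnoDB, not an exact count, so it is only usable where an approximation is acceptable; the same estimate is also available directly:

    -- Approximate row count without scanning the index.
    SELECT TABLE_ROWS
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME = 'gauge_data';

    -- SHOW TABLE STATUS exposes the same estimate.
    SHOW TABLE STATUS LIKE 'gauge_data';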

The server principal "<mydbuser>" is not able to access the database "<mydatabase>" under the current security context

Posted: 27 Feb 2013 02:28 PM PST

When I try to install a new database, I'm getting this error from SQL Server 2008.

The server principal "<mydbuser>" is not able to access the database "<mydatabase>" under the current security context.  

I looked it up and it says it has something to do with orphaned users when doing a restore. This connection has nothing to do with a restore, it's a brand new database, user and connection, so why am I getting this problem?

Also, the process I'm using to create the database and user/login is the same for all of my databases, so why does this occur only sometimes?

Any ideas?

Thanks.

EDIT: per request here's some code that fails, but understand that this works with every other application I've built. Also this is built into an ASP.NET MVC application.

    System.Web.Management.SqlServices.Install(
        config.Database.DataSource.Trim(),
        config.Database.UserName,
        config.Database.Password,
        config.Database.InitialCatalog,
        System.Web.Management.SqlFeatures.All);

    if (!Roles.RoleExists(role.Name))
        Roles.CreateRole(role.Name);  <-- this fails

Oh, another thing that doesn't make sense is that the database in the error message is not the database I'm trying to install to. It is one of the databases in this SQL Server instance (along with a bunch of others) but it's completely different. I can't figure out where it's getting this.

Example: the database I'm trying to install to is 'Rapper' and the database shown in the error message is 'GPS'. Why?
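For anyone debugging the same error, two hedged checks that often narrow it down (the names below are the placeholders used in the question): whether the login's default database is the unexpected one named in the error, and whether the login is actually mapped to a user in the database being installed to:

    -- Which database does this login land in by default?
    SELECT name, default_database_name
    FROM sys.server_principals
    WHERE name = N'mydbuser';

    -- Is the login mapped to a user in the target database?
    USE Rapper;
    SELECT dp.name
    FROM sys.database_principals AS dp
    JOIN sys.server_principals AS sp ON sp.sid = dp.sid
    WHERE sp.name = N'mydbuser';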

Does SQL Server read all of a COALESCE function even if the first argument is not NULL?

Posted: 27 Feb 2013 12:48 PM PST

I'm using the T-SQL COALESCE function where the first argument will not be NULL about 95% of the time it is run. If the first argument is NULL, the second argument is quite a lengthy process:

    SELECT COALESCE(c.FirstName
                   ,(SELECT TOP 1 b.FirstName
                     FROM TableA a
                     JOIN TableB b ON .....)
                   )

If, for example, c.FirstName = 'John', would SQL Server still run the sub-query?

I know that with the VB.NET IIF() function, even when the condition is True, the code still evaluates the third argument (even though it won't be used).
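For what it's worth, SQL Server documents COALESCE as shorthand for a CASE expression, which generally evaluates later arguments only when the earlier ones are NULL (though when an argument is a subquery, the CASE rewrite can cause it to be evaluated more than once when it is needed). A quick experiment worth running yourself:

    -- If the second argument were always evaluated, this would raise a
    -- divide-by-zero error; with short-circuiting it simply returns 1.
    SELECT COALESCE(1, 1/0) AS result;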
