Monday, July 29, 2013

[how to] sql server 2008 R2 Cannot connect to my local server

sql server 2008 R2 Cannot connect to my local server

Posted: 29 Jul 2013 09:23 PM PDT

When I connect to my local server, I get this message:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

This is my login information:

Server type:     Database Engine
Server name:     .
Authentication:  SQL Server Authentication
Login:           sa
Password:        123

How can I fix it? Thanks in advance. :)
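Error 2 usually means the client could not even locate the instance, so before touching logins it is worth confirming that this really is a default instance (so that "." resolves to it) and that mixed-mode authentication is enabled for sa. A minimal diagnostic sketch, assuming you can still connect via Windows Authentication in SSMS:

-- run while connected with Windows Authentication
SELECT SERVERPROPERTY('MachineName')              AS machine_name,
       SERVERPROPERTY('InstanceName')             AS instance_name,       -- NULL = default instance, so "." should work
       SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only;   -- 1 = SQL logins such as sa cannot connect

If InstanceName is not NULL you would need to connect as .\InstanceName, and for named instances the SQL Server Browser service must be running.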

MySQL slave doesn't reconnect to master after master is rebooted

Posted: 29 Jul 2013 04:38 PM PDT

I've got a new MySQL replication setup with one master and one slave.

In all of the following scenarios, the slave automatically reconnects to the master and resumes replication where it left off:

  • mysql service is restarted on the slave host
  • Slave host is rebooted
  • mysql service is restarted on the master host (reconnection happens after the default 60-second slave I/O reconnect timeout)

However, if I try rebooting the master host, the mysql service on the slave host neither detects that the master went down nor resumes replication when the master becomes available again. In this scenario:

  • I see no entries in the mysql error log on the slave host
  • Running show slave hosts; on master (once it comes back up) shows no hosts connected

UPDATE: Looking more closely at the mysql error log on the master, I suspect that the master is not completing a clean shutdown on reboot. When I stop the service normally, I see the following:

2013-07-29 17:43:45 5281 [Note] /usr/sbin/mysqld: Normal shutdown
2013-07-29 17:43:45 5281 [Note] Giving 1 client threads a chance to die gracefully
[snip]
2013-07-29 17:43:48 5281 [Note] InnoDB: Shutdown completed; log sequence number 146430781066
[snip]
2013-07-29 17:43:48 5281 [Note] Shutting down plugin 'binlog'
2013-07-29 17:43:48 5281 [Note] /usr/sbin/mysqld: Shutdown complete

130729 17:43:48 mysqld_safe mysqld from pid file /opt/mysql/run/mysqld.pid ended

...while after rebooting, the mysql error log on master ends with:

2013-07-29 17:46:58 6441 [Note] InnoDB: Starting shutdown...  

...and is missing the Shutdown complete message.
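If the master is going down hard on reboot, the slave's I/O thread may also take a long while to notice that the TCP connection died. As a hedged sketch (assuming MySQL 5.6, where MASTER_RETRY_COUNT is accepted by CHANGE MASTER TO; the values are only illustrative), you can shorten how quickly the slave detects the dead link and retries:

-- on the slave
STOP SLAVE;
CHANGE MASTER TO MASTER_CONNECT_RETRY = 10, MASTER_RETRY_COUNT = 86400;
SET GLOBAL slave_net_timeout = 60;   -- seconds the I/O thread waits before declaring the link dead
START SLAVE;

Checking SHOW SLAVE STATUS\G on the slave (Slave_IO_Running, Last_IO_Error) right after the master reboots is also worth doing.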

Importing a large column (8000 chars) from Excel using SSIS

Posted: 29 Jul 2013 02:56 PM PDT

I'm trying to set up a regular import of an Excel spreadsheet that we get from a vendor. I'm using SQL 2008 R2 SSIS to import it into a table. The problematic connection manager is an OLE DB connection to the spreadsheet; the spreadsheet itself is Excel 2005 format. The database connection manager uses Native OLE DB\SQL Native Client.

The problem I'm getting is that the OLE DB Source keeps setting a couple of the excel columns to DT_WSTR with a 255 length. I have 2 columns however that are 4000 and 8000 characters respectively. I've gone into the Advanced Editor/Input and Output Properties tab for the OLE DB Source and changed the DataType/Length in the External Columns list and the Output Columns list. Unfortunately when I leave the Advanced Editor an error appears in the error list.

Validation error. Data Flow Task: Data Flow Task: The output column "ColumnName" (226) on the error output has properties that do not match the properties of its corresponding data source column.

When I click on the Source again I get an option to "fix" the errors automatically. I then select "yes" and "ok". The error is now gone but when I go back to the External Columns the datatype/length settings are back to the original DT_WSTR/255.

I could manually change the spreadsheet into a delimited file to avoid the problem but would rather not add a manual step into the process. Does anyone know of a way to get an Excel source to allow for a long column?
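For what it's worth, the 255-character guess usually comes from the Excel driver sampling only the first few rows of the sheet. A hedged sketch of the source connection string side (the file path and provider version are placeholders; the row-sampling behaviour is controlled by the driver's TypeGuessRows registry setting, which is not shown here):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\vendor.xlsx;
Extended Properties="Excel 12.0;HDR=YES;IMEX=1"

IMEX=1 makes the driver treat mixed-type columns as text; it does not by itself raise the 255 limit, so making sure a long value appears in the sampled rows (or adjusting TypeGuessRows) may still be needed.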

How to avoid timeouts during big DML operations

Posted: 29 Jul 2013 02:07 PM PDT

I have a big insert script I need to run. It's about 55,000 records and 160 columns. The script has already been generated and I can't generate it again.

The problem I have is that this runs for about 4 hours or so, and during that time the system that uses this database gets really slow and times out a lot.

I wouldn't mind my INSERT being slower, but it shouldn't impact other users.

I was thinking of inserting in batches of, say, 500 rows with a WAITFOR between them, but I was wondering if there is a better option for doing this.
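If the rows can be staged into a helper table first, a batched loop with a short WAITFOR between chunks keeps each transaction small, so locks are released frequently and other sessions get through. A minimal sketch, assuming the rows are staged in dbo.StagingRows and the target is dbo.TargetTable with an Id key (all names are placeholders):

DECLARE @BatchSize int = 500;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.TargetTable (Id, Col1, Col2)          -- list all 160 columns in practice
    SELECT TOP (@BatchSize) s.Id, s.Col1, s.Col2
    FROM dbo.StagingRows AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;          -- nothing left to copy
    WAITFOR DELAY '00:00:05';         -- give other sessions room between batches
END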

Government border control database solution [on hold]

Posted: 29 Jul 2013 02:02 PM PDT

I am researching IT solutions for border control. Can you recommend any large database administration and security companies?

I'd really appreciate it.

Tablix group totals not displaying consistently

Posted: 29 Jul 2013 08:47 PM PDT

I admit I'm pretty new to Reporting Services, but it has been fairly elegant and straightforward up until now.

I have a report with multiple parameters that returns a dataset which is then grouped into three groups. The row groups for the report are laid out as:

Group 1
  Group 2
    Group 3
      (Details)

To each group I've added a totals row that subtotals two columns for the group.

On running the report with a relatively small result set everything appears to work fine, but as the result set gets larger the totals columns for various groups stop appearing. A set of group totals may show up at the end of the first grouping and then not again until the end of the report.

The behavior is consistent for a given query (i.e. the vanishing totals always appear, or not appear, in the same place). I've tried reworking the query the report is based on, I've gone so far as to rebuild the report in a new project, and the issue doesn't go away.

I haven't been able to find any documentation on this issue and I'm out of ideas on how to fix it.

Any suggestions (even if it's RTFM)?

Need to understand parallel query execution error

Posted: 29 Jul 2013 06:02 PM PDT

Today we experienced a degradation in performance on our production SQL Server. During the time this occurred we logged several "The query processor could not start the necessary thread resources for parallel query execution" errors. The reading that I've done suggests that this has to do with how many CPUs to use when executing a complex query. However, when I checked during the outage, our CPU utilization was only at 7%. Is there something else this could be referring to that I haven't come across yet? Is this a likely culprit of the performance degradation, or am I chasing a red herring?

My sp_configure values for this are as follows:

name                            minimum  maximum  config_value  run_value
cost threshold for parallelism  0        32767    5             5
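This error is about worker threads rather than CPU, so 7% CPU does not rule it out; a burst of concurrent parallel queries can exhaust the worker pool while the CPUs stay mostly idle. A diagnostic sketch using standard DMVs to compare the configured thread ceiling with what is actually in use:

SELECT max_workers_count FROM sys.dm_os_sys_info;          -- configured worker thread ceiling

SELECT COUNT(*) AS workers_allocated FROM sys.dm_os_workers;

SELECT scheduler_id, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

If workers_allocated approaches max_workers_count during an incident, raising 'cost threshold for parallelism' or capping 'max degree of parallelism' are the usual levers.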

how to check memory utilization

Posted: 29 Jul 2013 12:02 PM PDT

Is there a way to check how much of the memory being reserved by sql server is actually being used by sql server?

Is there a way to look at this information in the past? e.g. how much memory was being used an hour ago?
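For the current numbers, the process-memory DMV and the memory performance counters give a quick answer; SQL Server does not keep a history of this on its own, so for "an hour ago" you need something that samples these values on a schedule (an Agent job writing to a table, or the Management Data Warehouse). A sketch of the point-in-time queries:

-- memory the SQL Server process is using right now
SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

-- how much of its target the buffer pool has actually taken
SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');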

Cannot rename the table because it is published for replication

Posted: 29 Jul 2013 10:19 AM PDT

I need to rename some tables in my DB - SQL Server 2005, with this:

EXEC sp_rename 'dbo.TableName', 'Tmp_TableName', 'OBJECT'  

but I can't do this - I get this exception -

Cannot rename the table because it is published for replication

This DB doesn't have replication - it was removed 5 years ago; we just tried it and removed it after a week. So I suppose there must be a flag left somewhere.

Please help me remove this flag somehow.
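If replication really is gone, the leftover metadata/flag can usually be cleared with sp_removedbreplication, or by resetting the publish option; both are documented system procedures. A hedged sketch (the database name is a placeholder; take a backup and do this in a maintenance window):

EXEC sp_removedbreplication 'MyDatabase';

-- or just flip the "published" flag:
EXEC sp_replicationdboption @dbname  = 'MyDatabase',
                            @optname = 'publish',
                            @value   = 'false';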

Is the Service Broker good for publishing data between servers?

Posted: 29 Jul 2013 09:43 AM PDT

I'm trying to find a better solution when publishing data to a remote table.

So we have Server A and Server B. Server A holds a master table which should be replicated to Server B.

Currently, we have this pretty chaotic stored procedure that runs every minute or so. It locates any changes and inserts them into a temporary table on Server B. Once loaded, it inserts new records or updates existing records from this temp table (the table is hit constantly and we want to limit any form of locking).

We don't have a DBA currently so I'm trying my best (I'm just a web developer) to figure out better solutions that could scale in the future.

Note: We have a couple dozen stored procedures that are like this.

always on availability group with different speed disks

Posted: 29 Jul 2013 09:49 AM PDT

I'm looking to set up a SQL Server 2012 installation with an Always On Availability Group, where the 'passive' replica will be hosted at another site on the WAN and using synchronous data commit - the idea being that we will have a hot standby with no loss of data in the event of a failure at our primary site.

One potential problem that I foresee is that the secondary site has slower storage than our primary site. I don't care about that in the event of a failure; we can live with slow speeds for a period of time until the primary site is restored. My worry is that, because we are using synchronous commit, the slower disk speed at the secondary site will affect performance at the primary site during normal operation.

Is this a valid concern, or is it likely that the slower speed will be offset by, for example, the disk not having much read activity in comparison to the primary site?

Cannot remove unused filegroups

Posted: 29 Jul 2013 09:52 AM PDT

I wanted to remove some unused filegroups/files in a SQL Server Database but am stuck because SQL Server thinks the filegroups/files are still in use.

Some background:

  • We had some partitioned tables that we converted back to non-partitioned ones
  • All partition-functions and -schemes were deleted
  • I queried the DMVs for unused filegroups like this:

    SELECT *
    FROM sys.filegroups fg
    LEFT OUTER JOIN sysfilegroups sfg ON fg.name = sfg.groupname
    LEFT OUTER JOIN sysfiles f        ON sfg.groupid = f.groupid
    LEFT OUTER JOIN sys.indexes i     ON fg.data_space_id = i.data_space_id
    WHERE i.object_id IS NULL

    This gave me a list of filegroups/files that I tried to remove from the database, but some of them could not be removed. Example error message:

    The filegroup 'FG_XXXX' cannot be removed because it is not empty.  
  • I was not able to find any connection to tables/indexes for these filegroups via sys.data_spaces and sys.indexes:

    SELECT *
    FROM sys.data_spaces ds
    INNER JOIN sys.indexes i ON ds.data_space_id = i.data_space_id
    WHERE ds.name = 'FG_XXXX'
  • I am aware that a filegroup cannot be removed if it is referenced by some partition scheme.
    But this cannot be the reason here, as I deleted all partition schemes/functions in the database.

Any hints on what I could do to get rid of these filegroups?
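The "not empty" error means something is still allocated there even though sys.indexes shows nothing obvious; LOB data, heaps, or leftover allocations can keep a filegroup occupied. A diagnostic sketch that lists everything still allocated in the filegroup, using standard catalog views ('FG_XXXX' is the placeholder name from above):

SELECT OBJECT_NAME(p.object_id) AS object_name,
       i.name                   AS index_name,
       au.type_desc,
       ds.name                  AS filegroup_name
FROM sys.allocation_units AS au
JOIN sys.data_spaces      AS ds ON au.data_space_id = ds.data_space_id
JOIN sys.partitions       AS p
     ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
     OR (au.type = 2       AND au.container_id = p.partition_id)
JOIN sys.indexes          AS i  ON i.object_id = p.object_id
                               AND i.index_id  = p.index_id
WHERE ds.name = 'FG_XXXX';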

Full text query slow on first run

Posted: 29 Jul 2013 05:58 PM PDT

I have a SQL Server database that is 65 GB in size and has 6 million rows. The tables hold essays, and the average size of an essay is 450 words. When I run a query for the first time, it takes 3-10 seconds to show me the results. On running the same query again, it returns in less than 1 second (very fast).

Please let me know how to optimize it so that queries return in less than 1 second even on the first run.

Query example:

SELECT top 1 * FROM tblContent WHERE CONTAINS(pagecontent,'"increasing the likelihood"')  

Here are the details of the server (VPS):

  • OS: Windows Server 2012
  • SQL Server 2012 Enterprise (Trial)
  • Processor: set to 0
  • Memory: set to 0
  • RAM: 4GB
  • Processor: QUAD 4Ghz

Splitting different parts of a string

Posted: 29 Jul 2013 08:00 AM PDT

I have a SQL Server database with a column that holds a backslash-delimited (\) string. An example of this string would be:

category_path
=============
RootCategory\Middle Category\Child Category\Child of child category
RootCategory\Middle Category\Other Child Category\

There are a lot of category paths like this. I would like to parse all of them into another table:

category_name            parent_path
====================================
RootCategory             null
Middle Category          RootCategory
Child Category           RootCategory\Middle Category
Child of child category  RootCategory\Middle Category\Child Category
Other Child Category     RootCategory\Middle Category

I have left out the duplicates here, but the result could contain duplicates (I will need to cursor over the rows of this table later, and I can call distinct there).

There are a lot of examples around the net where a string is simply split (here, for example). I was thinking about writing a query where I split the string, cursor over the results and accumulate the parent_path in each step, but that seems very suboptimal.

Is there a solution where I wouldn't have to declare so many cursors for a simple string?
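One cursor-free approach is a recursive CTE that peels one path segment off per recursion level and carries the accumulated parent path along. A sketch, assuming the source table is dbo.Categories with the category_path column shown above (the table name is a placeholder):

WITH split AS (
    SELECT
        CAST(LEFT(category_path, CHARINDEX('\', category_path + '\') - 1) AS varchar(max))    AS category_name,
        CAST(NULL AS varchar(max))                                                             AS parent_path,
        CAST(STUFF(category_path, 1, CHARINDEX('\', category_path + '\'), '') AS varchar(max)) AS remainder
    FROM dbo.Categories
    UNION ALL
    SELECT
        CAST(LEFT(remainder, CHARINDEX('\', remainder + '\') - 1) AS varchar(max)),
        CAST(COALESCE(parent_path + '\', '') + category_name AS varchar(max)),
        CAST(STUFF(remainder, 1, CHARINDEX('\', remainder + '\'), '') AS varchar(max))
    FROM split
    WHERE remainder <> ''
)
SELECT category_name, parent_path
FROM split
WHERE category_name <> ''
OPTION (MAXRECURSION 0);

Duplicates can then be removed with DISTINCT when inserting into the target table.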

Mysqldump option for crash recovery? [on hold]

Posted: 29 Jul 2013 07:52 AM PDT

Question

Is there any command in mysqldump that can be used for crash recovery?

Background

I got this question in an interview, where the interviewer said that there is an option in mysqldump that will help rebuild the system after a crash.

I went through the documentation but could not find any such option. Please let me know if such an option exists. (I did not pass the interview.)
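For what it's worth, mysqldump itself does not perform crash recovery; the interviewer was probably thinking of taking a consistent dump that records the binary log coordinates, so the dump can be restored and the binary logs replayed up to the point of the crash. A hedged sketch of that style of backup command (the output path is a placeholder):

mysqldump --all-databases --single-transaction --master-data=2 --routines --events > /backup/full_dump.sql

--single-transaction gives a consistent snapshot for InnoDB tables, and --master-data writes the binary log file and position into the dump as a comment.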

A Tool to Manage Tracking Query Performance Improvements [duplicate]

Posted: 29 Jul 2013 10:20 AM PDT


I'm building a BI system on SQL Server 2012. I have a test set of data with some tens of millions of rows.

Currently many functions and stored procedures are unacceptably slow. I'm looking for a tool I can use to automatically compare query times from previous executions as I run and re-run these queries and make performance improvements.

I want to tinker, measure, compare execution times, repeat.

For example, I want to be able to see a report showing what query times were when run on 7/31/2013. Then after I make some changes, I want to kick off the tool again on say, 8/1/2013. I'd like to see side-by-side the difference in query times between all of the historical executions.

I know I can track query times manually with SET STATISTICS TIME ON. I've also seen stored procedures others have written to manually track the performance of one or two queries. But the process is very manual.

I'm looking for a tool into which I can enter the 30 functions and stored procedures I want to test (or give it a trace), and then it will do the work of kicking off the queries 5 times (or replaying the trace), record the average time each query took to execute, and compare those times to previous executions.

Regarding Similar Questions

I've looked at:

  1. Testing stored procedure scalability - My question is not about stress testing. (Although the RML tools mentioned are VERY close to what I'm looking for. If all else fails, I will use the RML tools.)
  2. How to profile stored procedures - I know how to profile a stored procedure and use tools like SQL Profiler. Profiler won't give me comparisons to previous executions.

The RML tools are very close to what I want, as one can load traces into multiple DBs and view the results side by side.

But is there any tool that automates: replaying sql statements and viewing the performance diff of different iterations side-by-side? (The major limitation with the RML tools is that the trace from each iteration has to be loaded into a different database and the side-by-side comparison is only possible by opening two instances of Reporter and switching back and forth between the two of them. Ten different databases and ten different windows for ten different executions seems unwieldy...)

In Microsoft SQL Server 2008, syntax generates the error "The Parallel Data Warehouse (PDW) features are not enabled."

Posted: 29 Jul 2013 05:45 PM PDT

I have the following virtual column generated from an aggregate over a sorted partition:

MIN(picture_id) OVER ( PARTITION BY [360_set] ORDER BY picture_id ASC )  

However, when I execute that, I get the following.

Msg 11305, Level 15, State 10, Line 12  The Parallel Data Warehouse (PDW) features are not enabled.  

This is where it gets interesting though, without a sort order on the partition, it works:

MIN(picture_id) OVER ( PARTITION BY [360_set] )  

And, further, ROW_NUMBER(), a window function (not an aggregate function), works with an explicit order on the partition:

ROW_NUMBER() OVER ( PARTITION BY [360_set] ORDER BY picture_id ASC )  

How come the desired statement doesn't work? Where is this documented? Since the version information was requested, this is what I see in Help → About:

Microsoft SQL Server Management Studio          10.0.5512.0
Microsoft Analysis Services Client Tools        10.0.5500.0
Microsoft Data Access Components (MDAC)         6.1.7601.17514
Microsoft MSXML                                 3.0 6.0
Microsoft Internet Explorer                     9.10.9200.16635
Microsoft .NET Framework                        2.0.50727.5472
Operating System                                6.1.7601

The result from SELECT @@VERSION is Microsoft SQL Server 2008 (SP3) - 10.0.5512.0 (X64) Aug 22 2012 19:25:47 Copyright (c) 1988-2008 Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)
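For context: putting ORDER BY inside an aggregate's OVER clause (a cumulative/windowed aggregate) is only supported from SQL Server 2012 onwards; on 2008 the parser apparently surfaces the unsupported syntax as this misleading PDW error. A workaround sketch that stays on 2008 is to get the first picture_id per set with ROW_NUMBER (which does accept ORDER BY) and join it back; the table name dbo.Pictures is a placeholder:

SELECT t.*,
       fp.picture_id AS first_picture_id
FROM dbo.Pictures AS t
JOIN (
    SELECT [360_set],
           picture_id,
           ROW_NUMBER() OVER (PARTITION BY [360_set] ORDER BY picture_id ASC) AS rn
    FROM dbo.Pictures
) AS fp
  ON fp.[360_set] = t.[360_set]
 AND fp.rn = 1;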

SQL server level permissions

Posted: 29 Jul 2013 08:32 AM PDT

I am aware of the Microsoft-provided code to transfer logins between SQL Server instances; however, this only transfers the account and the password.

What if that particular account has various roles and permissions assigned to it at the server level? Is there an equivalent piece of code to script those permissions as well?

Thanks
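A starting point, sketched from the server-level catalog views (the login name is a placeholder; the output is meant to be reviewed and then run on the target server). Note this simple version ignores WITH GRANT OPTION and permissions scoped to endpoints or other logins:

DECLARE @login sysname = 'SomeLogin';

-- server role memberships (sp_addsrvrolemember still works on 2008-era servers)
SELECT 'EXEC sp_addsrvrolemember ' + QUOTENAME(m.name, '''') + ', ' + QUOTENAME(r.name, '''') + ';'
FROM sys.server_role_members rm
JOIN sys.server_principals r ON rm.role_principal_id   = r.principal_id
JOIN sys.server_principals m ON rm.member_principal_id = m.principal_id
WHERE m.name = @login;

-- server-scoped permissions (class 100 = server)
SELECT p.state_desc + ' ' + p.permission_name + ' TO ' + QUOTENAME(sp.name) + ';'
FROM sys.server_permissions p
JOIN sys.server_principals sp ON p.grantee_principal_id = sp.principal_id
WHERE sp.name = @login
  AND p.class = 100;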

How can I execute a stored procedure whose name is in a variable?

Posted: 29 Jul 2013 01:51 PM PDT

Given a variable that contains a stored procedure name:

DECLARE @stored_procedure_name varchar(512);
SET @stored_procedure_name = 'some_stored_procedure_name';

How can I execute the stored procedure (without passing in any arguments)?
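Two common patterns, sketched below; both assume the variable holds a bare, one-part procedure name:

-- simplest form: EXEC accepts a variable containing the procedure name
EXEC @stored_procedure_name;

-- dynamic SQL variant, useful if you need to build a larger statement around it
-- (QUOTENAME expects a name of at most 128 characters)
DECLARE @sql nvarchar(max);
SET @sql = N'EXEC ' + QUOTENAME(@stored_procedure_name) + N';';
EXEC sp_executesql @sql;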

When creating remote BLOB store is "RBSFilestreamFile" always the name of the file to be added to the FILEGROUP?

Posted: 29 Jul 2013 05:44 PM PDT

When creating a remote BLOB store in SQL Server (2008 R2) is "RBSFilestreamFile" always the name of the file when adding it to the FILEGROUP like in this query (this is the name I've seen used in every example I've found online, but I need to know for sure)?

ADD FILE (name = RBSFilestreamFile, filename = 'c:\Blobstore')
    TO FILEGROUP RBSFilestreamProvider

I'm asking because I'm working on an application for restoring SharePoint content databases and need to know if I can hardcode this string into the application.

I know you can create file groups and files with any name you want, but specifically for setting up RBS with SharePoint using SQL 2008's built in FILESTREAM provider are these names an expected convention? i.e. Will it work if I name my FILEGROUP and/or FILE something else?

Restore exceeds licensed limit of 10240 MB per database. All I really need is the DB structure

Posted: 29 Jul 2013 12:10 PM PDT

I received a .bak file from a customer that I need to restore, but the restore exceeds the 10240 MB per-database limit of my SQL Server 2008 R2 edition. All I really need is the DB structure. Is there a way to restore it anyway, or to restore just the structure?

Database migration, how to

Posted: 29 Jul 2013 12:03 PM PDT

I have two databases, DB1 and DB2. Both databases contain the two tables mo_sms and mt_sms. This is the structure of those tables:

CREATE TABLE IF NOT EXISTS `mo_sms` (
  `id_MO` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `precio` float(11,2) unsigned DEFAULT '0.00' COMMENT 'Precio del MO',
  `id_user` int(10) unsigned NOT NULL,
  `state` char(1) NOT NULL DEFAULT '0' COMMENT '0=Por procesar 1=Procesado',
  `tipo_sms` tinyint(4) NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MO`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_sms_proceso_state` (`sms_proceso`,`state`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`),
  KEY `ix_varios` (`anio_sms`,`mes_sms`,`comando_id`,`shortcode_id`,`brand_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MO' AUTO_INCREMENT=82972 ;

CREATE TABLE IF NOT EXISTS `mt_sms` (
  `id_MT` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL DEFAULT 'c' COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `id_user` int(10) unsigned NOT NULL,
  `tipo_sms` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `id_MO` int(11) unsigned NOT NULL,
  `state` char(1) DEFAULT '0' COMMENT '0=Por Procesar 1=Procesado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MT`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MT' AUTO_INCREMENT=93853;

I have some values in DB2 that should be migrated (inserted) into DB1. My problem is that I don't know how to take the records from DB2.mo_sms, together with their related rows in DB2.mt_sms, and insert them into DB1.mo_sms and DB1.mt_sms. For example, I'll take all the records from mo_sms with fecha_sms on 2013-04-19; if I simply insert those records into DB1.mo_sms, new IDs will be generated and the relationship to mt_sms will be lost. Should I do this programmatically, or is there a way to do it using just SQL queries? I'm using MySQL as the DBMS.
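One plain-SQL way to keep the mo/mt relationship intact is to shift the copied id_MO values by a fixed offset so they cannot collide with IDs already in DB1, and apply the same offset when copying the mt_sms rows. A sketch, assuming both schemas sit on the same MySQL server and are named db1 and db2 (the names and the date range are placeholders):

SET @offset = (SELECT COALESCE(MAX(id_MO), 0) FROM db1.mo_sms);

INSERT INTO db1.mo_sms
      (id_MO, sms_proceso, msisdn, texto_sms, brand_id, fecha_sms, comando_id,
       alias_desc, shortcode_id, precio, id_user, state, tipo_sms, anio_sms, mes_sms)
SELECT id_MO + @offset, sms_proceso, msisdn, texto_sms, brand_id, fecha_sms, comando_id,
       alias_desc, shortcode_id, precio, id_user, state, tipo_sms, anio_sms, mes_sms
FROM  db2.mo_sms
WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20';

INSERT INTO db1.mt_sms
      (sms_proceso, msisdn, texto_sms, brand_id, fecha_sms, comando_id, alias_desc,
       shortcode_id, id_user, tipo_sms, id_MO, state, anio_sms, mes_sms)
SELECT mt.sms_proceso, mt.msisdn, mt.texto_sms, mt.brand_id, mt.fecha_sms, mt.comando_id, mt.alias_desc,
       mt.shortcode_id, mt.id_user, mt.tipo_sms, mt.id_MO + @offset, mt.state, mt.anio_sms, mt.mes_sms
FROM  db2.mt_sms AS mt
JOIN  db2.mo_sms AS mo ON mo.id_MO = mt.id_MO
WHERE mo.fecha_sms >= '2013-04-19' AND mo.fecha_sms < '2013-04-20';

Inserting explicit values into the AUTO_INCREMENT column is allowed, and the counter advances past the highest inserted id, so later inserts will not collide.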

SQL Server Replication: "ALTER TABLE ALTER COLUMN" is not propagated to subscribers

Posted: 29 Jul 2013 08:44 AM PDT

We are running SQL Server 2008 R2 SP1 as publisher & distributor, and SQL Server 2005 SP3 as subscriber. The replication of schema changes is activated, and the replication has been running for years, including frequent schema changes (new column, new constraints, etc).

The following instruction was sent on the publisher:

use myDatabase  alter table Tbl_TypeLignePaye        alter column natureTypeLignePaye nvarchar(12)  go  

where field natureTypeLignePaye was originally nvarchar(3) null

The query ran without errors on the main database. The result is the following:

  1. The field natureTypeLignePaye still appears as nvarchar(3) in the object browser
  2. The column properties screen indicates a nvarchar type with a length of 12
  3. The change was not propagated to the subscribers

Any idea on what is going on with this database?

(Screenshots on the publisher: the object browser window and the column properties window show inconsistent values for the field type and length.)

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 29 Jul 2013 02:44 PM PDT

As my title mentions, I have a master and a slave database.

The master is for operational data and my slave is mainly for reporting.

The issue is that I need to create extra tables for reporting that can't be on the master, but the way my replication is currently set up (the simplest one mentioned in the official docs), this breaks replication.

How could I add tables on the slave without the master caring about them? Is it even possible?
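If the reporting tables can live in their own schema (or follow a naming pattern), replication filters on the slave let the slave carry objects the master never writes to, because matching events from the master's binary log are simply not applied. A hedged sketch of the slave's my.cnf; the schema and pattern names are placeholders, the options themselves are standard replication filters:

[mysqld]
# ignore a whole reporting schema that exists only on the slave
replicate-ignore-db         = reporting
# or ignore tables matching a pattern inside the replicated schema
replicate-wild-ignore-table = appdb.report\_%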

Replication issue - CREATE SELECT alternative?

Posted: 29 Jul 2013 04:44 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT parts (several million rows).

Since CREATE ... SELECT is DDL, if replication attempts to update rows in the same tables the SELECT reads from, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)

Max.

Which text-index I should create for xooops engine to achieve better search results?

Posted: 29 Jul 2013 10:44 AM PDT

In one of our projects we use the XOOPS engine to manage content. In the MySQL slow query log, most of the queries look like the following:

SELECT p.uid, f.forum_id, p.topic_id, p.poster_name, p.post_time, f.forum_name, p.post_id, p.subject
FROM xps33_bb_posts p, xps33_bb_posts_text pt, xps33_bb_forums f
WHERE p.post_id = pt.post_id
  AND p.approved = 1
  AND p.forum_id = f.forum_id
  AND f.forum_id IN (1,4,61,7,9,17,20,45,35,44,38,39,43,53,54,55,56,57,58,60,14,29,40,26,18,41,33,24,32,59,25)
  AND ((p.subject LIKE '%rivi%' OR pt.post_text LIKE '%orvi%')
   AND (p.subject LIKE '%care%' OR pt.post_text LIKE '%gor%'))
ORDER BY p.post_time DESC LIMIT 0, 5;

I can't change the queries, as that would involve changing the engine, which is not an option at the moment. But I can help the engine search faster. As I understand it, since the tables use the MyISAM engine I can create full-text indexes, which should make searching faster; am I right?

So, in general, which indexes should I create to keep queries like the following from running for a long time?

+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
| id | select_type | table | type   | possible_keys       | key     | key_len | ref                | rows   | Extra                       |
+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
|  1 | SIMPLE      | p     | ALL    | PRIMARY,forumid_uid | NULL    | NULL    | NULL               | 144090 | Using where; Using filesort |
|  1 | SIMPLE      | f     | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.forum_id |      1 | Using where                 |
|  1 | SIMPLE      | pt    | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.post_id  |      1 | Using where                 |
+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
3 rows in set (0.00 sec)
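Since the tables are MyISAM, FULLTEXT indexes can be added, but they only help queries written with MATCH ... AGAINST; leading-wildcard LIKE '%word%' predicates cannot use them (or any B-tree index), so without touching the generated SQL the gain will be limited. A sketch of the indexes that would support full-text search on these columns:

ALTER TABLE xps33_bb_posts      ADD FULLTEXT INDEX ft_subject   (subject);
ALTER TABLE xps33_bb_posts_text ADD FULLTEXT INDEX ft_post_text (post_text);

-- a query would then need to be rewritten along these lines to benefit:
-- WHERE MATCH(p.subject)    AGAINST ('+rivi* +care*' IN BOOLEAN MODE)
--    OR MATCH(pt.post_text) AGAINST ('+orvi* +gor*'  IN BOOLEAN MODE)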

How to drop a DB2 instance when the instance owner was removed

Posted: 29 Jul 2013 11:44 AM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed the instance owner account, and probably recreated it. Now, the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.

Thanks

Ensure correct username when using pg_restore

Posted: 29 Jul 2013 09:44 AM PDT

I have just installed postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"  sudo -u postgres psql -c "create database app_production owner app_user;"  

Now, when I try to restore the dump I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now, in psql, using \l to see ownership, I get the following:

                                 List of databases
      Name       |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+-----------+----------+-------------+-------------+-----------------------
 app_production  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres
 template1       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of app_production database has now turned from app_user to postgres. I would have expected the owner of the app_production database to still be app_user, so what am I doing wrong?

BTW, The --schema=public was added, because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing is that the owner recorded in the dump is the user the database had on Heroku, which is something like 'jebf473b73bv73v749b7'.
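If the data comes across fine and only the ownership is off, reassigning ownership after the restore may be the simplest fix. A sketch, run as the postgres superuser (the database, schema, and role names match the ones above):

ALTER DATABASE app_production OWNER TO app_user;

-- generate ALTER TABLE statements for everything in the public schema,
-- then review and run the generated output
SELECT 'ALTER TABLE public.' || quote_ident(tablename) || ' OWNER TO app_user;'
FROM pg_tables
WHERE schemaname = 'public';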

Splitting Tables in MySQL. Good practice?

Posted: 29 Jul 2013 06:25 PM PDT

I have started working on an existing project and the previous developer had split up a table into 10 separate tables with identical schemas but different data.

The tables look like this:

[tableName_0]  [tableName_1]  [tableName_2]  [tableName_3]  [tableName_4]  [tableName_5]  [tableName_6]  [tableName_7]  [tableName_8]  [tableName_9]  

The primary key is an integer id field. The application uses a hash algorithm (id mod 10) to know which table to access when doing lookups. For example, id = 10 would map to [tableName_0].

Combined, the tables have probably 100,000 rows and the growth rate is relatively low.

So, my question is whether this is a viable solution, or even a good practice in any situation. My inclination is to push to have them combined, as it will make things easier as far as UNIONs etc. go. The main downsides are changing all the application code and whether it is even worth it in the long run.
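If you do decide to consolidate, the mechanical part is small because the schemas are identical and the ids are already unique across the shards (the id determines the shard); the real cost is updating the application's hash-based lookups. A consolidation sketch, using the placeholder base name from above:

CREATE TABLE tableName LIKE tableName_0;

INSERT INTO tableName SELECT * FROM tableName_0;
INSERT INTO tableName SELECT * FROM tableName_1;
-- ...repeat for tableName_2 through tableName_9...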

How to execute SQL against all DBs on a Server

Posted: 29 Jul 2013 11:46 AM PDT

I have some standard SQL that I run against multiple databases on a single server to help me diagnose problems:

select
    so.name,
    so.type,
    MAX(case when sc.text like '%remote%' then '' ELSE 'N' END) AS Relevant,
    @@ServerName as Server,
    DB_Name() as DBName
from
    sysobjects so with (nolock)
    join syscomments sc with (nolock) on so.id = sc.id
where (sc.text like '%emote%')
group by so.name, so.type
order by so.type, so.name

How can I execute this against all databases on a single server? (besides manually connecting to one at a time and executing)
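One common approach is the undocumented (but long-standing) sp_MSforeachdb procedure, which runs a command once per database and substitutes ? with each database name. A sketch wrapping the query above; since it is undocumented, test it before relying on it:

EXEC sp_MSforeachdb N'
USE [?];
SELECT  so.name,
        so.type,
        MAX(CASE WHEN sc.text LIKE ''%remote%'' THEN '''' ELSE ''N'' END) AS Relevant,
        @@SERVERNAME AS Server,
        DB_NAME()    AS DBName
FROM    sysobjects  so WITH (NOLOCK)
JOIN    syscomments sc WITH (NOLOCK) ON so.id = sc.id
WHERE   sc.text LIKE ''%emote%''
GROUP BY so.name, so.type
ORDER BY so.type, so.name;';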
