Thursday, August 29, 2013

[how to] Unable to DROP Database

Unable to DROP Database

Posted: 29 Aug 2013 09:00 PM PDT

I get the '..currently in use' message when I try to drop my database.

I've restarted my SQL Server Express service a couple of times, restarted the computer a couple of times, and even tried to kill the process using the database.

Nothing has worked so far. When I run KILL xx, nothing happens the first time; when I run it again, it says that it is not an active process, as if the first KILL had only marked the process for deletion. The entry still shows up in the sysprocesses table.

What is the safest way to drop my database?

Environment: SQL Server 2008 R2 Express.

SSMS is not an option; I am using sqlcmd.
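For context, the usual sqlcmd-only approach is to force the database into single-user mode (rolling back any open transactions) and drop it in the same batch. A minimal sketch, assuming a placeholder database name MyDb, run while connected to master:

USE master;
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kicks out other sessions
DROP DATABASE MyDb;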

How do you handle cross-validating inputs vs database information?

Posted: 29 Aug 2013 02:42 PM PDT

I have created a reporting engine and it is up and running pretty well right now. It consists of a lot of employee data and is used for many things.

An example: we take a report from a vendor about usage on their platform, and my database adds an extra 5-6 fields that are used to cross-reference groups, titles, and so on. Most vendors only report a user number or email, so internal managers want more info. It works great. However, it is not real time. It is a manual process right now where I take the files from the vendor, load them into a table, then execute my query to give them their new file.

Another example: an admin runs 50 distribution lists that might have a few thousand emails on them. She sends me her lists to be cross-validated against the current db. I have queries to do this automatically, but again it is the same process as before. I know I can schedule the query to fire off whenever, but I don't want to bog down the server, which is hosting about 10 internal sites.

Two issues here:

  1. How do I deal with the cross-validation? Should this be handled via a web applet? What is the logic that I need in place?
  2. How do you validate reports (CSV, XLS) that are imported to a table?
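For context, the manual cross-reference step described above boils down to a join between the imported vendor rows and the internal employee table. A sketch only, with hypothetical table and column names (vendor_staging, employees, email):

-- Vendor file loaded into a staging table, then enriched from the employee master.
SELECT v.*,
       e.department,
       e.title,
       e.manager
FROM vendor_staging AS v
LEFT JOIN employees AS e
       ON e.email = v.email;   -- rows where e.email IS NULL fail the cross-validation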

Collation changes after some time

Posted: 29 Aug 2013 02:28 PM PDT

I have a table which is written to by an outside script. The default collation is utf8_general_ci, but to connect it with a foreign key to another table I need to convert it to utf8_unicode_ci. Every time I change it to utf8_unicode_ci and select "Convert Data" (in HeidiSQL), it looks good in the code and alters the table properly. Strangely, a few minutes later I check and it is back to utf8_general_ci.
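For reference, the conversion being applied (HeidiSQL's "Convert Data" presumably issues something equivalent) is along these lines; my_table is a placeholder name:

-- Convert the table's character set/collation so the foreign key columns match.
ALTER TABLE my_table
  CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

If the column keeps reverting, something is likely re-running an equivalent statement with utf8_general_ci (for example a schema-sync step or the outside script recreating the table).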

Upgrading MySQL 5.1 to 5.5 on Centos 6 without removing dependencies

Posted: 29 Aug 2013 06:59 PM PDT

I'm trying to upgrade MySQL 5.1 to 5.5 using a yum/rpm combo (yum to remove MySQL 5.1 and rpm to install MySQL 5.5). Most instructions I've found suggest doing this first:

yum remove mysql mysql-server mysql-libs  

However, in my case doing so would also remove lots of other packages which I definitely don't want to remove:

================================================================================
 Package              Arch        Version                   Repository     Size
================================================================================
Removing:
 mysql                x86_64      5.1.69-1.el6_4            @updates      2.4 M
 mysql-libs           x86_64      5.1.69-1.el6_4            @updates      4.0 M
 mysql-server         x86_64      5.1.69-1.el6_4            @updates       25 M
Removing for dependencies:
 cronie               x86_64      1.4.4-7.el6               @cr           166 k
 cronie-anacron       x86_64      1.4.4-7.el6               @cr            43 k
 crontabs             noarch      1.10-33.el6               @cr           2.4 k
 mysql-devel          x86_64      5.1.69-1.el6_4            @updates      388 k
 percona-toolkit      noarch      2.2.4-1                   @percona      5.7 M
 perl-DBD-MySQL       x86_64      4.013-3.el6               @base         338 k
 php-mysql            x86_64      5.3.3-23.el6_4            @updates      216 k
 postfix              x86_64      2:2.6.6-2.2.el6_1         @base         9.7 M
 sysstat              x86_64      9.0.4-20.el6              @base         807 k

Transaction Summary
================================================================================

How can I upgrade MySQL without removing those packages? Can I somehow remove MySQL 5.1 without its dependencies (well, maybe apart from mysql-devel) and get away with it, without a library mismatch? I've found one blog post that suggests upgrading MySQL inside a yum shell, but I'm not sure whether that would change anything.

How do I get security information out of an MKF database? (FileNet Security Database)

Posted: 29 Aug 2013 01:40 PM PDT

I'm trying to get the security information for all the documents in our FileNet system. It is stored in an MKF database in hexadecimal format. There's a decode command that will display the group names that have read, write and execute on a document, but I can't figure out how to get that data out of the system. Does anyone here have any FileNet conversion experience?

Target database memory exceed available shared memory

Posted: 29 Aug 2013 03:09 PM PDT

I am trying to install Oracle 12c on a RHEL 6.4 system with 64 GB of memory. When I try to set the automatic memory target to 48 GB, I get the following error:

[INS-35172] Target database memory (49897MB) exceeds available shared memory (32189MB) on the system.

What do I need to change to allow Oracle to use more than 32189 MB?

/etc/sysctl.conf contains the following, so I'm not sure why it won't allow me to use more than 32 GB of shared memory:

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 16777216
#kernel.shmall = 1073741824
fs.file-max = 6815744
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range = 9000 65500
vm.min_free_kbytes = 51200
fs.file-max = 6815744

limits.conf contains the following:

oracle          soft    nproc           2047
oracle          hard    nproc           32768
oracle          soft    nofile          32768
oracle          hard    nofile          65536

Transaction log autogrowth duration

Posted: 29 Aug 2013 12:09 PM PDT

I am using event notifications to capture data and log file autogrowth events for all databases on my servers. I'm using the data for analysis of database storage configuration.

In looking at the data, I've noticed that the average duration for transaction log growth is well above anything I would expect, which leads me to think that I'm either misinterpreting the data or overlooking something about how transaction log autogrowth works.

This is an example of a log file growth event that was captured today:

<EVENT_INSTANCE>
  <EventType>LOG_FILE_AUTO_GROW</EventType>
  <PostTime>2013-08-29T11:14:26.447</PostTime>
  <SPID>97</SPID>
  <DatabaseID>32</DatabaseID>
  <NTDomainName />
  <HostName>[cleansed]</HostName>
  <ClientProcessID>4884</ClientProcessID>
  <ApplicationName>.Net SqlClient Data Provider</ApplicationName>
  <LoginName>[cleansed]</LoginName>
  <Duration>4173000</Duration>
  <StartTime>2013-08-29T11:14:22.263</StartTime>
  <EndTime>2013-08-29T11:14:26.437</EndTime>
  <IntegerData>64000</IntegerData>
  <ServerName>[cleansed]</ServerName>
  <DatabaseName>MyDB</DatabaseName>
  <FileName>MyDB_log</FileName>
  <LoginSid>[cleansed]</LoginSid>
  <EventSequence>14637017</EventSequence>
  <IsSystem />
  <SessionLoginName>[cleansed]</SessionLoginName>
</EVENT_INSTANCE>

Since duration is reported in milliseconds, I'm reading this as it taking 69.54 minutes to grow the file. Autogrowth for this log file is set to 512 MB (limited to 2 TB).

SELECT
    growth AS GrowthPages,
    growth*8/1000 AS GrowthMB,
    max_size,
    is_percent_growth
FROM MyDB.sys.database_files
WHERE type=1

GrowthPages GrowthMB        max_size        is_percent_growth
64000       512             268435456       0

All of the databases log to the same volume, which is connected to a SAN via Fibre Channel. (I'd have to get with our SAN admin for more details on the storage config if needed.)

The instance is SQL 2012 Enterprise, server is Windows 2008 R2 Enterprise.

Why would it take over an hour to grow the log by 512MB? We're not noticing a delay in operations on any of these databases (unless we're just overlooking it). There are a handful of other databases with similar duration; their autogrowth settings are the same. Other databases with smaller autogrowth settings have proportionally smaller durations.

Getting the most out of SQL Server with a million rows a day

Posted: 29 Aug 2013 02:06 PM PDT

I have a Windows service that is inserting 1000 rows at 1-minute intervals into a SQL Server 2008 Standard database table that looks like:

id, datetime, key_id, value

key_id is a foreign key to another table that looks like id, name

I'm building a localhost website reporting interface in PHP. I will be doing queries from this interface like:

  • Last 7 days of data for key_id's 1,2,50,377 at 1 minute intervals
  • Last 30 days of data for key_id's 40,47,732,400,43,22,18,5,14 at 1 hour intervals
  • 1 hour intervals for key_id 7,8,20,40 for all of May 2009

And I want it to be as fast as possible.

How should I tune SQL Server to be performant in this case?
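For context on what tuning usually focuses on here: the queries above filter by key_id and a time range, so the common starting point is a composite index covering both plus the value column. A sketch with an illustrative table name (dbo.Samples):

-- Composite index so per-key, time-range queries avoid scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Samples_KeyId_Datetime
    ON dbo.Samples (key_id, [datetime])
    INCLUDE (value);

-- The query shape it is meant to serve:
SELECT key_id, [datetime], value
FROM dbo.Samples
WHERE key_id IN (1, 2, 50, 377)
  AND [datetime] >= DATEADD(DAY, -7, GETDATE());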

SSIS Connection String Expression at Run Time and during Execution

Posted: 29 Aug 2013 07:50 PM PDT

I have an Excel connection manager which should write to a file that does not exist prior to runtime but rather is created during execution. I have an expression set for the connection string property of the Excel connection manager which points to the file that will be created. The problem I am having is that since the file does not yet exist (prior to execution), when I go to run the package it gives me an error because it cannot find the file. If I create a dummy file and place it where the Excel connection manager is pointing, the package runs OK.

What I have tried is pointing the Excel connection manager to an existing file prior to run time, since that does not produce an error, and then hoping it will pick up the new path (Excel connection string) from the expression. What happens instead is that, because I have an expression set for the connection string property, it will not let me overwrite the connection string; it keeps defaulting to the value of the expression, which contains other variables that are only populated at run time, and that is where I get my error. The expression string prior to runtime does not point to an actual file, partly because the file does not yet exist, but mainly because the expression is not fully populated until runtime since it also contains variables.

As of right now it seems like I might have to point to a dummy template file (without using any expressions) prior to runtime, and then maybe use a Script Task to change the connection string property of the Excel connection manager to a variable containing an expression with the path to the file...

Does this sound like the way to go or am I over complicating things?

Deadlock from mysqldump --single-transaction, is it possible?

Posted: 29 Aug 2013 02:39 PM PDT

Is it possible to get a deadlock if you run mysqldump with the --single-transaction argument, or any other complication?

It is used to back up a live site with many other queries running at the same time.

Extra info:

The mysqldump process can take several minutes to complete.

The tables are InnoDB.

EDIT

I am concerned about deadlock errors that could hit the live applications relying on the database while mysqldump is running.
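For background, with InnoDB tables --single-transaction makes mysqldump read everything inside one consistent snapshot instead of locking the tables, roughly equivalent to this session (a sketch):

-- Roughly what mysqldump --single-transaction does for InnoDB tables:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- ... plain SELECTs of each table's rows; consistent reads take no row locks ...
COMMIT;

Because the dump only performs non-locking reads, it does not normally participate in deadlocks with the application's transactions, though DDL issued while the dump is running is a different story.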

Why would running a SQL query overnight break my database for the following day?

Posted: 29 Aug 2013 12:23 PM PDT

I apologise in advance if this is a duplicate question (which I bet it will be). I had a good search through and found similar questions but nothing that seemed an exact match.

I wrote a report that takes around 30 minutes to run. It does a lot of number crunching and I am quite happy that it takes so long to execute but this means the report can't be interactive. Basically the report has to calculate a profit and loss report as if the company has lost exactly one client. Then it puts that client back and runs the report again as if the next client was lost, etc. I think 30 minutes is actually rather good performance considering how intensive this is.

Users are happy with this report being updated nightly and then they can see the cached results the following day. So the plan was to run the report overnight and save the output somewhere.

I wrote a stored procedure to do this and I can run it during the day in around 30 minutes. However, when the report runs overnight as a SQL Agent job it NEVER completes. It kicks off at 9pm and when I get into work the next day I will find my inbox flooded with emails saying that SQL Server is broken, isn't performing properly, etc. I kill the job and everything goes back to normal.

Except it doesn't go back to normal. What happens next is that ANY report that is run against the same database will timeout. The only way to correct this is to either wait a day (without running the overnight report) or to run another query against the same database directly from SSMS as this seems to reset the problem for some reason.

Obvious questions:

  • is it the SQL Agent job causing the problem (maybe permissions)? No, if I run the SQL Agent Job during the day it works fine, it's some sort of timing issue I think.
  • have you tried moving the schedule? I have tried everything from running it at as early as 6pm to running it at as late as 6am and it doesn't seem to make any difference.
  • what else runs overnight? There are a whole load of other jobs that run for different databases. I have moved the schedule around so my job runs in isolation (in theory). There is a job that runs an index optimisation across ALL databases but I can't turn this job off as it is managed externally.
  • Does the query plan change for other reports? I have no way to tell. If I run a report via SSMS then it works; if I run it via the Excel add-in, which does exactly the same thing but via a SQL connection, it times out. I can view the process in Activity Monitor and watch it run for ages before it eventually dies.

My personal opinion is that the query engine is making bad decisions about which index to use, but when I run a report interactively from SSMS this somehow resets "something" internally and it goes back to running reports using the right indexes again. I have no evidence to back this up, apart from the fact that the behaviour is the same if I disable one of the indexes on one of the "raw data" tables.

When I say, "run a report interactively" I just mean this:

SELECT * FROM FormatMonthEndReport(2013, 1);  

Where FormatMonthEndReport is a table-valued function.

I don't know if this is even relevant but I make a lot of use of APPLY in all of these reports.

I imagine more detail will be asked for, but I don't want to add too much at this stage as this would rapidly turn into a wall of text. I will come back and see what is asked for first so my detail is more targeted.

Edit - it looks like running a SQL Profiler trace is going to be the first step. I am going to set this up to run overnight and then report back tomorrow. I need to leave so I won't have time to respond to the other suggestions until then... but I am not ignoring you!
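A possible companion to the trace: a query against the plan cache DMVs to see whether the timing-out reports are reusing a plan compiled during the overnight run (a sketch, not specific to this report; creation_time is what points at a stale plan):

SELECT TOP (20)
       qs.creation_time,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;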

Slow query and Insert with trigger

Posted: 29 Aug 2013 01:03 PM PDT

I have a vehicle tracking system application based on PHP and SQL Server 2008 R2 Enterprise Edition, running on an HP Z800 Xeon server. My database is growing by millions of rows day by day, and selects and inserts have become very slow. I have one main table named TraceData that contains a record for every signal from the devices, with more records arriving every second. When we try to generate a history or report for any vehicle it gets stuck and takes a very long time, sometimes more than 5 minutes. The structure of my table is below:

CREATE TABLE [dbo].[TraceData](
    [AutoID] [bigint] IDENTITY(1,1) NOT NULL,
    [IMEI] [varchar](20) NOT NULL,
    [Alram] [varchar](20) NULL,
    [GPRMCState] [varchar](10) NULL,
    [Lat] [decimal](18, 6) NULL,
    [NS] [varchar](10) NULL,
    [Lng] [decimal](18, 6) NULL,
    [WE] [varchar](10) NULL,
    [Speed] [decimal](18, 4) NULL,
    [Direction] [decimal](18, 1) NULL,
    [PDOP] [decimal](18, 1) NULL,
    [HDOP] [decimal](18, 1) NULL,
    [VDOP] [decimal](18, 1) NULL,
    [RTC] [datetime] NULL,
    [VCS] [varchar](10) NULL,
    [VBV] [decimal](18, 2) NULL,
    [VCV] [decimal](18, 2) NULL,
    [ADA] [decimal](18, 2) NULL,
    [ADB] [decimal](18, 2) NULL,
    [LACCIL] [varchar](10) NULL,
    [LACCIC] [varchar](10) NULL,
    [Temperature] [decimal](18, 1) NULL,
    [Mileage] [decimal](18, 4) NULL,
    [ModelType] [nvarchar](50) NULL,
    [Serial] [varchar](10) NULL,
    [TrackerTime] [datetime] NULL,
    [ServerTime] [datetime] NULL,
    [Address] [nvarchar](2000) NULL,
    [IO1] [varchar](10) NULL,
    [IO2] [varchar](10) NULL,
    [IO3] [varchar](10) NULL,
    [IO4] [varchar](10) NULL,
    [IO5] [varchar](10) NULL,
    [IO6] [varchar](10) NULL,
    [IO7] [varchar](10) NULL,
    [IO8] [varchar](10) NULL,
    [IO9] [varchar](10) NULL,
    [IO10] [varchar](10) NULL,
    [IO11] [varchar](10) NULL,
    [IO12] [varchar](10) NULL,
    [Ignition] [int] NULL,
    [Multiple] [int] NULL,
    [First] [int] NULL,
    [MultipleSession] [bigint] NULL,
 CONSTRAINT [PK_TraceData] PRIMARY KEY CLUSTERED
(
    [AutoID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[TraceData] ADD CONSTRAINT [DF_TraceData_Engine] DEFAULT ((0)) FOR [Ignition]
GO

ALTER TABLE [dbo].[TraceData] ADD CONSTRAINT [DF_TraceData_Multiple] DEFAULT ((0)) FOR [Multiple]
GO

ALTER TABLE [dbo].[TraceData] ADD CONSTRAINT [DF_TraceData_First] DEFAULT ((0)) FOR [First]
GO

ALTER TABLE [dbo].[TraceData] ADD CONSTRAINT [DF_TraceData_MultipleSession] DEFAULT ((0)) FOR [MultipleSession]
GO

I also have a trigger on it:

ALTER TRIGGER [dbo].[update_trackers_table]
    ON [dbo].[TraceData]
    AFTER INSERT
AS
if EXISTS (SELECT * FROM inserted WHERE IO5 = '1' OR (IO3 ='1' AND ModelType = 'GT08'))
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.

    -- Set Replication
    insert into TraceDataApp (IMEI,Alram,GPRMCState,Lat,NS,Lng,WE,Speed,Direction,Mileage,ModelType,Serial,TrackerTime,ServerTime,IO1,
        IO2,IO3,IO4,IO5,IO6,IO7,IO8,IO9,IO10,IO11,IO12,Ignition)
    SELECT IMEI,Alram,GPRMCState,Lat,NS,Lng,WE,Speed,Direction,Mileage,ModelType,Serial,TrackerTime,ServerTime,IO1,
        IO2,IO3,IO4,IO5,IO6,IO7,IO8,IO9,IO10,IO11,IO12,Ignition FROM inserted;
    -- End Set Replication

    UPDATE trackers
    SET LastAlarm = inserted.Alram,
        LastLangtitute = inserted.Lng,
        LastLatitute = inserted.Lat,
        LastTime = DATEADD(HOUR,4,inserted.TrackerTime),
        LastSpeed = (inserted.Speed * 1.852),
        TMileage = (trackers.TMileage + inserted.Mileage),
        LastDirection = inserted.Direction,
        IGN = 'On',
        LastTraceID = inserted.AutoID
    FROM trackers, Inserted
    WHERE trackers.IMEI = inserted.IMEI;
END
else
BEGIN
    -- Set Replication
    insert into TraceDataApp (IMEI,Alram,GPRMCState,Lat,NS,Lng,WE,Speed,Direction,Mileage,ModelType,Serial,TrackerTime,ServerTime,IO1,
        IO2,IO3,IO4,IO5,IO6,IO7,IO8,IO9,IO10,IO11,IO12,Ignition)
    SELECT IMEI,Alram,GPRMCState,Lat,NS,Lng,WE,Speed,Direction,Mileage,ModelType,Serial,TrackerTime,ServerTime,IO1,
        IO2,IO3,IO4,IO5,IO6,IO7,IO8,IO9,IO10,IO11,IO12,Ignition FROM inserted;
    -- End Set Replication

    --Set Replication
    --insert into TraceDataApp (AutoID,IMEI,Alram,GPRMCState,Lat,NS,Lng,WE,Speed,Direction,Mileage,ModelType,Serial,TrackerTime,ServerTime,Address,IO1,
    -- IO2,IO3,IO4,IO5,IO6,IO7,IO8,IO9,IO10,IO11,IO12,Ignition) SELECT AutoID,IMEI,Alram,GPRMCState,Lat,NS,Lng,WE,Speed,Direction,Mileage,ModelType,Serial,TrackerTime,ServerTime,Address,IO1,
    -- IO2,IO3,IO4,IO5,IO6,IO7,IO8,IO9,IO10,IO11,IO12,Ignition FROM inserted;
    -- End Set Replication

    UPDATE trackers
    SET LastAlarm = inserted.Alram,
        LastLangtitute = inserted.Lng,
        LastLatitute = inserted.Lat,
        LastTime = DATEADD(HOUR,4,inserted.TrackerTime),
        LastSpeed = (inserted.Speed * 1.852),
        TMileage = (trackers.TMileage + inserted.Mileage),
        LastDirection = inserted.Direction,
        IGN = 'Off',
        LastTraceID = inserted.AutoID
    FROM trackers, Inserted
    WHERE trackers.IMEI = inserted.IMEI;
END

Any help will be really appreciated.
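For context on why the history reports are slow: the only index shown is the clustered primary key on AutoID, so a per-vehicle, per-date-range report has to scan the whole table. A hedged sketch of the kind of index usually added for that access pattern (the included columns are an assumption about what the reports select):

CREATE NONCLUSTERED INDEX IX_TraceData_IMEI_TrackerTime
    ON dbo.TraceData (IMEI, TrackerTime)
    INCLUDE (Lat, Lng, Speed, Direction, Mileage);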

Replicating from master to slaves and from slaves to master

Posted: 29 Aug 2013 04:27 PM PDT

I have a master and multiple slave machines. All SQL operations on the master node should be sent to the slave nodes. But the tables on the slave nodes may also be written to locally. These local changes on the slave node should be reflected to the master node.

For example, let's say I have a master machine, let's call it "M", and two slave machines, let's call them "A" and "B".

All these machines have a table named test_table with a column named "id".

I insert data to the test_table on M machine.

(M)test_table
 |id|
 |1 |

(A)test_table
 |id|

(B)test_table
 |id|

now this change is reflected to the slaves:

(M)test_table
 |id|
 |1 |

(A)test_table
 |id|
 |1 |

(B)test_table
 |id|
 |1 |

Now on slave A, I make a local change.

(M)test_table
 |id|
 |1 |

(A)test_table
 |id|
 |2 |

(B)test_table
 |id|
 |1 |

Now this change is reflected to the master server:

(M)test_table
 |id|
 |2 |

(A)test_table
 |id|
 |2 |

(B)test_table
 |id|
 |1 |

And then master server replicates this change to the slaves:

(M)test_table
 |id|
 |2 |

(A)test_table
 |id|
 |2 |

(B)test_table
 |id|
 |2 |

What is this kind of replication called? And how can I achieve it with PostgreSQL?

how to create an incremental database copy in postgresql?

Posted: 29 Aug 2013 04:18 PM PDT

Virtual machine software like VirtualBox allows one to make incremental VM clones: data, once "touched" (opened writable), is copied and stored in the incremental cache of the new clone.

I am searching for the same option for a Postgres DBMS.

How can I set up an incremental database copy, where unchanged entries are read from the original database and touched/modified rows come from the local copy?

If not on the DBMS level, how can I emulate such behavior at the file-system/storage level using a separate DBMS instance?

Background: The idea is to utilize the powerful database-server yet without incurring much resource overhead for a staged/developer database-copy.

Feel free to edit the subject or post to improve clarity.
Thanks.

.bak file not visible in any directory

Posted: 29 Aug 2013 04:17 PM PDT

I have a .bak file created today by someone else, manually created through SSMS 2008 R2. I'm trying to manually restore the database; unfortunately, the file doesn't appear when I go to browse for it.

I can script the restore process, but I've seen this problem before and I'm not sure what could cause the .bak to not appear.
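For reference, the scripted route mentioned above bypasses the browse dialog entirely; a sketch with placeholder paths and names:

-- Inspect the logical file names inside the backup first...
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDatabase.bak';

-- ...then restore, moving the files to the desired locations.
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase'     TO N'C:\Data\MyDatabase.mdf',
     MOVE N'MyDatabase_log' TO N'C:\Data\MyDatabase_log.ldf',
     RECOVERY;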

Minimizing Page Fetches

Posted: 29 Aug 2013 02:17 PM PDT

I have a complicated database structure and am trying to use it to retrieve records based on multiple selection criteria from several tables. As a general rule, is it better to use correlated sub-queries to check the state of flags in other tables to determine eligibility, or am I better off creating views that use joins to represent the valid records? As a more concrete example, is this:

Select
    Col1, Col2, Col3
From
    Table1
Where
    (Select RefCol From Table2 Where Table2.PK = Table1.FK) = "Condition"

Superior or Inferior to something like this:

Select
    T1.Col1, T1.Col2, T1.Col3
From
    T1
Inner Join
    T2
    On T1.FK = T2.PK
Where
    T2.RefCol = "Condition"

-- Edited --

As a corollary question: is it productive to create views which contain intermediary validations? I.e., if I repeatedly need to check that T1.Col1 = 1 and T1.Col2 = 0, is it worthwhile to create the following view:

Create View T1Validated As
Select
    Col1, Col2
From
    T1
Where
    Col1 = 1
And
    Col2 = 0

And then later use the existence of a record in T1Validated in subsequent checks, or is that likely to produce additional database page retrievals and/or table scans?

Query to get reposts from people the user is following

Posted: 29 Aug 2013 01:17 PM PDT

I have a table posts where all the posts by users are stored, the structure of this table is as follows

| post_id | post_user | post_content | post_date |

the users table is as follows

| user_id | username | user_joined |

user relationship table is as follows

| follower | following | rel_date |

This is the query I am using to get the posts from people the user is following, to show them:

SELECT * FROM posts WHERE post_user in(SELECT follower from user_follow where following=$loggedin_user)

Now I want users to share posts, for which I created a table repost_user as follows

| repost_user | original_post_user | post_id | repost_date |

I want to get posts from people the user is following, which includes reposts too. How do I do this?

EDIT : How my resultset should look

post_id | post_content | post_user | post_date | is_repost | repost_user | repost_date

For example, if it's a normal post the row should look like:

23 | <some content> | 3 | <post date> | false | null | null |

If it's a repost the row would be:

23 | <some content> | 3 | <post date> | true | 2 | <repost_date> |
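One way to shape that result set is a UNION of ordinary posts and reposts from the same set of people, mirroring the subquery already used above; a sketch (0/1 stand in for false/true):

SELECT p.post_id, p.post_content, p.post_user, p.post_date,
       0 AS is_repost, NULL AS repost_user, NULL AS repost_date
FROM posts p
WHERE p.post_user IN (SELECT follower FROM user_follow WHERE following = $loggedin_user)

UNION ALL

SELECT p.post_id, p.post_content, p.post_user, p.post_date,
       1 AS is_repost, r.repost_user, r.repost_date
FROM repost_user r
JOIN posts p ON p.post_id = r.post_id
WHERE r.repost_user IN (SELECT follower FROM user_follow WHERE following = $loggedin_user)

ORDER BY post_date DESC;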

Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop

Posted: 29 Aug 2013 07:17 PM PDT

I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform.

Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it?

Thanks...

Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!

Posted: 29 Aug 2013 06:17 PM PDT

I have a server right now receiving more raw data files in 1 hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M (rounded up) rows. The table is currently MyISAM. It has 1000 columns, mostly boolean and a few varchar.

The fastest way I've found to get the information into my DB so far is:

  1. Process raw data into CSV files.
  2. LOAD DATA INFILE into the rawData table.
  3. Insert the rawData table into Table1 (on duplicate key, run my function).
  4. Truncate rawData.
  5. Repeat.

This worked fine until I started merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16 GB of RAM, so I set my Key_Buffer_Pool to 6 GB. My query cache pool is 16 MB and my query cache limit is 10 MB. I would just replace the information, but it has to be an upsert: update the fields that are true if the row exists, and insert if it does not.

Things I'm looking into at the moment:

  • Possibly switching the table to InnoDB? Not sure about the performance; the insert into an empty table is fine, it's the merge that's slow.
  • Maybe allowing more table cache? Or even query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

To compare my 2 bool columns.
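For context, the whole merge step is essentially an INSERT ... ON DUPLICATE KEY UPDATE whose update clause ORs each incoming flag into the existing row; a sketch using the 3_InMarket column from the fragment above (other columns elided, table names as in the description):

INSERT INTO Table1 (id, 3_InMarket /* , ...remaining flag columns... */)
SELECT r.id, r.3_InMarket /* , ... */
FROM rawData AS r
ON DUPLICATE KEY UPDATE
    -- existing value OR incoming value, repeated per flag column
    3_InMarket = (3_InMarket OR VALUES(3_InMarket));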

Update

  • OK, I set up RAID 0.
  • Changed my query to take a write lock on the tables when inserting.
  • When importing the CSV I'm disabling keys, then re-enabling them before the upsert.
  • Changed concurrent_insert to 2.

USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)

Posted: 29 Aug 2013 12:17 PM PDT

Does anyone know what triggers the USER_ID field in the log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view.

I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, it appears that value is always NULL.

Idle connection plus schema-modifying query causing locked database

Posted: 29 Aug 2013 05:51 PM PDT

As part of our automated deployment process for a web app running on a LAMP stack, we drop all our triggers and stored procedures and recreate them from source control. It turns out there was a hidden danger to this approach that we hadn't thought about.

A few days ago we managed to end up with the database for (the staging version of) our web app stuck in a horribly hung state after the following sequence of events:

  1. I connect to the remote database from our office (via Python's MySQLdb, as it happens) and run a few SELECT queries on the Foo table.
  2. I leave the connection open, because I'm lazy.
  3. My boss commits some changes on his laptop, pushes to the remote repo on the web server, and goes to lunch without looking at the output.
  4. The deployment hook on the web server tries to update the triggers and stored procedures in the database, but isn't able to even DROP the first trigger because the trigger involves the Foo table, which my currently sleeping connection had previously done some SELECTs from.
  5. Now nobody can SELECT from the Foo table at all, because the connection trying to DROP the trigger has already taken out a lock on the Foo table that prevents any other connections from accessing the Foo table in any way - even though it's still waiting for the sleeping connection to be closed before it can actually do anything.
  6. Crucial business processes relying upon the Foo table grind to a halt, alarms sound, and our web app stops serving customers. My boss flies into a rage and declares that heads will roll if the cause of the problem is not found and fixed so that this can never happen again. (Just kidding, it was only our staging server and my boss is very friendly.)

What's interesting is that this scenario wasn't caused by any kind of deadlock; it was caused by a sleeping connection implicitly holding some kind of lock that prevented the DROP TRIGGER statement from executing, just by virtue of having done a SELECT on the same table previously. None of the anti-deadlock features of MySQL could automatically kill a process and save the situation, because ultimately everything could continue as soon as my original process - the idle one that had only ever done SELECTs - was killed. The fact that MySQL locks behave this way by default seems perverse to me, but that's not the point. I'm trying to figure out a way to ensure that the disaster scenario described above can't ever recur (especially on our live server). How would you suggest I do this?

We've talked the problem over in the office, and there are a couple of hypothetical solutions we saw:

  • Change some config setting somewhere so that sleeping processes time out after 10 seconds by default, so that a sleeping process can never sit on locks. Better yet, have them just release all locks after 10 seconds so that I can still go to lunch and leave my MySQL shell open, or my Python window open with a MySQLdb connection active, then come back and use it, without fear of breaking anything.

    • This might be really irritating when trying to run queries manually, especially ones that require grouping into a transaction.
  • Work some magic on the queries that try to replace the triggers and stored procedures so that the acquisition of locks required for the relevant DROPs and CREATEs is made into an atomic operation - something like, if the query can't acquire all the locks it needs immediately in sequence, then it releases them and tries again periodically until it works.

    • This might just make our deployment process never complete, though, if the database is too busy for it to be able to grab all the locks in one go.
  • Drastically reduce the frequency of schema-modifying queries we make (it only seems to be these that can be blocked from starting by a connection that's only done SELECTs), for instance by having our deployment script check whether a stored procedure or trigger in source control has changed from the version in the database before DROPping and reCREATEing the one on the database.

    • This only mitigates the problem, it doesn't actually eliminate it.

We're not sure if either of the first two solutions we considered is even possible in MySQL, though, or if we're missing a better solution (we're developers, not DBAs, and this is outside of our comfort zone). What would you recommend?
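On the first idea, MySQL does expose idle-connection timeouts as ordinary server variables (they apply to every idle client, not just ad-hoc shells); a sketch with illustrative values:

-- Close connections that have been idle longer than 10 minutes.
-- Affects connections opened after the change.
SET GLOBAL wait_timeout = 600;
SET GLOBAL interactive_timeout = 600;

Killing an idle connection releases whatever locks it holds, which is what the first bullet is after, although it also means a forgotten shell loses its session state.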

Bitmask Flags with Lookup Tables Clarification

Posted: 29 Aug 2013 08:17 PM PDT

I've received a dataset from an outside source which contains several bitmask fields stored as varchars. They come in lengths as short as 3 and as long as 21 bits. I need to be able to run SELECT queries against these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags: HasHouse, HasCar, HasCat, HasDog. 0000 means has none, 1111 means has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is ok, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data is why I'm trying to find alternative solutions.
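To make the two comparison modes concrete with the four-flag example (people and flags are made-up names, the flags varchar is assumed converted to an integer column as described, and the leftmost flag is assumed to be the most significant bit):

-- Request mask: HasHouse and HasCat => binary 1010 => 10 decimal.
SELECT * FROM people WHERE flags = 10;        -- AND logic as described above: exact match
SELECT * FROM people WHERE flags & 10 <> 0;   -- OR logic: at least one of the requested flags

The second form is the one that cannot use an ordinary index seek, which is where the lookup-table idea in the linked answer comes in.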

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 29 Aug 2013 03:17 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

SQLite writing a query where you select only rows nearest to the hour

Posted: 29 Aug 2013 05:17 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01) and days may be missing.

What I need is the row nearest to each hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour could not be found it would be helpful if the query could include a time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out:

SELECT strftime('%S', time, 'unixepoch'),
       strftime('%M', time, 'unixepoch'),
       strftime('%H', time, 'unixepoch'),
       strftime('%d', time, 'unixepoch'),
       strftime('%m', time, 'unixepoch'),
       strftime('%Y', time, 'unixepoch'),
       value
FROM Timestream
WHERE idSensor = 359;
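A sketch of the nearest-to-the-hour selection in SQLite: bucket each reading to its nearest hour and keep the closest one per bucket. Because every reading is assigned to its nearest hour, the 30-minute limit is satisfied automatically; hours with no readings simply produce no row rather than a time with no value.

SELECT datetime((time + 1800) / 3600 * 3600, 'unixepoch') AS hour_slot,
       value,
       MIN(ABS(time - (time + 1800) / 3600 * 3600)) AS seconds_off
FROM Timestream
WHERE idSensor = 359
GROUP BY (time + 1800) / 3600
ORDER BY hour_slot;

This leans on SQLite's documented behaviour that a bare column (value here) in a MIN() aggregate query is taken from the row holding the minimum.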

Listing the existing SQL Server Logins and Users

Posted: 29 Aug 2013 01:37 PM PDT

I know we can check the logins and the users that are defined using the GUI in SQL Server, but I am wondering how we can do this check using a script.

I ran the query below, but it shows principal_id, which I'm not sure how to map to get the permission level.

SELECT * FROM Sys.login_token   

So is there any built-in stored proc that can list the logins and the users with their permission level?

Thank you.
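For reference, the catalog views usually queried for this are sys.server_principals (logins, with server role membership) and sys.database_principals / sys.database_permissions (users and their permissions in a given database); a sketch:

-- Logins and the server roles they belong to.
SELECT sp.name AS login_name,
       sp.type_desc,
       rp.name AS server_role
FROM sys.server_principals AS sp
LEFT JOIN sys.server_role_members AS srm ON srm.member_principal_id = sp.principal_id
LEFT JOIN sys.server_principals AS rp  ON rp.principal_id = srm.role_principal_id
WHERE sp.type IN ('S', 'U', 'G');   -- SQL logins, Windows logins, Windows groups

-- Users in the current database and their explicit permissions.
SELECT dp.name AS user_name,
       perm.permission_name,
       perm.state_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.database_permissions AS perm ON perm.grantee_principal_id = dp.principal_id
WHERE dp.type IN ('S', 'U', 'G');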

slow load speed of data from mysqldump

Posted: 29 Aug 2013 11:44 AM PDT

I've got a moderate size MySQL database with about 30 tables, some of which are 10 million records, some 100 million. The mysqldump of all the tables (into separate files) is fairly fast, takes maybe 20 minutes. It generates about 15GB of data. The largest dumped files are in the 2GB range.

When I load the data into MySQL on another box, a six-core, 8GB machine, it takes forever. Easily 12 clock hours or more.

I'm just running the mysql client to load the file, i.e.

mysql database < footable.sql  

with the file coming directly out of mysqldump:

mysqldump database foo > footable.sql  

Clearly I am doing something wrong. Where do I start so it can finish in a reasonable time?

I'm not using any switches on either the dump or the load.
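One commonly cited adjustment when replaying a large dump into InnoDB tables is to relax per-row checks for the loading session and commit once at the end; a sketch, run inside the mysql client with the same file as above:

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE footable.sql;
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;

Whether this is appropriate depends on trusting the dump's integrity, since duplicate-key and foreign-key violations are not caught during the load.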

Add Oracle Label Security to an existing Oracle installation

Posted: 29 Aug 2013 01:45 PM PDT

We are using Amazon EC2 to host some Oracle database instances while we are evaluating and prototyping some software. We are using the AMIs provided by Oracle - specifically, Oracle Database 11g Release 2 (11.2.0.1) Enterprise Edition - 64 Bit (Linux).

I now need to do some work based on Oracle Label Security - OLS. It appears that when Oracle was first installed into an AMI, the Oracle Label Security option was not enabled. So I want to add OLS to an existing installation of Oracle.

I have tried following the installation instructions in the Oracle Label Security Administrator's Guide, using media downloaded from following the download links on the Oracle home page. However the instructions don't match up with what I see when I run the installer - according to the instructions, on the second page of the wizard I should see an Advanced Installation option, but I don't see that - I can only see three radio buttons, "Create and configure a database", "Install database software only" and "Upgrade an existing database". I tried to muddle through the instructions but there were several inconsistencies, as if I was running a different version of the installer to the documentation writer. The AMI, documentation and media are all Oracle Database 11g Release 2 (11.2.0.1) so I don't know why they don't all seem to match.

I tried to proceed with an installation anyway, and did find the checkbox to install Oracle Label Security. However it would not accept my existing ORACLE_HOME as an installation destination, telling me: [INS-32025] The chosen installation conflicts with software already installed in the given Oracle home.

So, in short - is it possible for me to add the OLS option to an existing installation? If so - how? If not - how do I make a new installation and attach the existing database to the new installation?

Edit 2011-05-23

After doing some more poking around, I found a copy of the Oracle installer at $ORACLE_HOME/oui/bin/runInstaller. This installer looks visually different and has different page flows. First thing it wanted to know was the source location, so I gave it the path to stage/products.xml on the installation media. It then gave me a choice of Enterprise, Standard or Custom - I selected Custom. It then asked for the Oracle home - this was pre-selected with the value of $ORACLE_HOME. So far so good.

Then, the first odd thing: it gave me a long list of installable products. Oracle Label Security was in this list, but it was already marked as "Installed". However if I clicked the Installed Products... button for information, Oracle Label Security was not present.

I selected Oracle Label Security anyway and it changed to say "reinstall". But on clicking Next I got an error message (screenshot not reproduced here).

How can I track database dependencies?

Posted: 29 Aug 2013 04:32 PM PDT

As internal applications evolve over a number of years, you occasionally find there are a number of tables that people believe are no longer relevant and want to cull. What are the practical methods for identifying database dependencies, both within the SQL environment, and maybe onward into things like SSIS?

I've worked places where fairly brutal options have been taken such as:

  • Drop first, ask questions later (can kill a data warehouse build if it tries to extract a table that no longer exists)
  • Remove permissions first, and wait for the errors to be reported (can cause silent bugs if the failure isn't handled correctly)

I appreciate that SQL Server comes with tools for tracking dependencies within that instance, but these seem to struggle if you have databases on different instances. Are there options that make it easier to query dependencies, maybe answering questions like "Where is this column used?" with answers like "Over on this other server in this stored procedure" or "Over in this SSIS package"?
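Within a single database, the usual starting point is sys.sql_expression_dependencies, which also records server and database names for cross-instance (linked server) references, though only as text; a sketch for a hypothetical table name:

SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
       OBJECT_NAME(d.referencing_id)        AS referencing_object,
       d.referenced_server_name,            -- populated for linked-server references
       d.referenced_database_name,
       d.referenced_entity_name,
       d.referenced_minor_name              AS referenced_column
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_entity_name = N'SomeTable';

Anything outside the engine (SSIS packages, reports, ad-hoc extracts) will not show up here, which is why the column-level question usually also needs a search of the package or report sources.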
