[how to] Unable to DROP Database
- Unable to DROP Database
- How do you handle cross-validating inputs vs database information?
- Collation changes after some time
- Upgrading MySQL 5.1 to 5.5 on Centos 6 without removing dependencies
- How do I get security information out of an MKF database? (FileNet Security Database)
- Target database memory exceed available shared memory
- Transaction log autogrowth duration
- Getting the most out of SQL Server with a million rows a day
- SSIS Connection String Expression at Run Time and during Execution
- Deadlock from mysqldump --single-transaction, is it possible?
- Why would running a SQL query overnight break my database for the following day?
- Slow query and Insert with trigger
- Replicating from master to slaves and from slaves to master
- how to create an incremental database copy in postgresql?
- .bak file not visible in any directory
- Minimizing Page Fetches
- Query to get reposts from people the user is following
- Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop
- Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!
- USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)
- Idle connection plus schema-modifying query causing locked database
- Bitmask Flags with Lookup Tables Clarification
- SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?
- SQLite writing a query where you select only rows nearest to the hour
- Listing the existing SQL Server Logins and Users
- slow load speed of data from mysqldump
- Add Oracle Label Security to an existing Oracle installation
- How can I track database dependencies?
Unable to DROP Database Posted: 29 Aug 2013 09:00 PM PDT I get the '..currently in use' message when I try to drop my database. I've restarted my SQL Server Express service a couple of times, restarted the computer a couple of times, and even tried to kill the process using the database. Nothing has worked so far. What is the safest way to drop my database? Environment: SQL Server 2008 R2 Express. SSMS is not an option; I am using sqlcmd. |
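The sequence I'm considering running from sqlcmd, though I haven't confirmed it is safe in my situation (MyDatabase is a placeholder name), is:

    -- Connected to master from sqlcmd: force the database into single-user
    -- mode, rolling back any open transactions, then drop it.
    USE master;
    GO
    ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    GO
    DROP DATABASE MyDatabase;
    GO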
How do you handle cross-validating inputs vs database information? Posted: 29 Aug 2013 02:42 PM PDT I have created a reporting engine and it is up and running pretty well right now. It consists of a lot of employee data and is used for many things. An example: we take a report from a vendor about usage on their platform, and my database adds on an extra 5-6 fields used to cross-reference groups, titles, whatever. Most vendors just report out user # or email, so internal managers want more info. Works great. However, it is not real time. It is a manual process right now where I take the files from the vendor, load them into a table, then execute my query to give them their new file. Another example: an admin runs 50 distribution lists that might have a few thousand emails on them. She sends me her lists to be cross-validated against the current db. I have queries to do this automatically, but again, same process as before. I know I can schedule the query to fire off whenever, but I don't want to bog down the server, which is hosting about 10 internal sites. Two issues here: 1. How do I deal with the cross-validation? Should this be handled via a web applet? What is the logic that I need in place? 2. How do you validate reports (CSV, XLS) that are imported to a table? |
Collation changes after some time Posted: 29 Aug 2013 02:28 PM PDT I have a table which is being targeted by an outside script that writes stuff into it. The default collation is |
Upgrading MySQL 5.1 to 5.5 on Centos 6 without removing dependencies Posted: 29 Aug 2013 06:59 PM PDT I'm trying to upgrade MySQL 5.1 to 5.5 using a yum/rpm combo (yum to remove MySQL 5.1 and rpm to install MySQL 5.5). Most instructions I've found suggest doing this first: However, doing so would in my case remove lots of other packages which I definitely don't want to remove: How can I upgrade MySQL without removing those packages? Can I somehow remove MySQL 5.1 without its dependencies (well, maybe besides mysql-devel) and get away with it (without a library mismatch)? I've found one blog that suggests upgrading MySQL in a yum shell, but I'm not sure whether that would change anything: |
How do I get security information out of an MKF database? (FileNet Security Database) Posted: 29 Aug 2013 01:40 PM PDT I'm trying to get the security information for all the documents in our FileNet system. It is stored in an MKF database in hexadecimal format. There's a decode command that will display the group names that have read, write and execute on the document, but I can't figure out how to get that data out of the system. Does anyone here have any FileNet conversion experience? |
Target database memory exceed available shared memory Posted: 29 Aug 2013 03:09 PM PDT I am trying to install Oracle 12c on a RHEL 6.4 system with 64GB of memory. When I try to set automatic memory management to 48GB, I get the error: [INS-35172] Target database memory (49897MB) exceeds available shared memory (32189MB) on the system. What do I need to change to allow Oracle to use more than 32189MB? /etc/sysctl.conf contains the following, so I'm not sure why it won't allow me to use more than 32G of shared memory. limits.conf contains the following: |
Transaction log autogrowth duration Posted: 29 Aug 2013 12:09 PM PDT I am using event notifications to capture data and log file autogrowth events for all databases on my servers. I'm using the data for analysis of database storage configuration. In looking at the data I've noticed that the average duration for transaction log growth is well above anything I would expect, which leads me to think that I'm either misinterpreting the data or overlooking something related to how transaction log autogrowth works. This is an example of a log file growth event that was captured today: Since duration is reported in milliseconds, I'm reading this as it taking 69.54 minutes to grow the file. Autogrowth for this log file is set to 512MB (limited to 2TB). All of the databases log to the same volume, which is connected to a SAN via Fibre Channel (I'd have to get with our SAN admin for more details on the storage config if needed). The instance is SQL 2012 Enterprise; the server is Windows 2008 R2 Enterprise. Why would it take over an hour to grow the log by 512MB? We're not noticing a delay in operations on any of these databases (unless we're just overlooking it). There are a handful of other databases with similar durations; their autogrowth settings are the same. Other databases with smaller autogrowth settings have proportionally smaller durations. |
Getting the most out of SQL Server with a million rows a day Posted: 29 Aug 2013 02:06 PM PDT I have a Windows service that is inserting 1000 rows at 1-minute intervals into a SQL Server 2008 Standard database table that looks like:
key_id is a foreign key to another table that looks like: I'm building a localhost website reporting interface in PHP. I will be doing queries from this interface like:
And I want it to be as fast as possible. How should I tune SQL Server to be performant in this case? |
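To make the question more concrete, here is roughly the shape of table and indexing I have in mind (names and types are placeholders; my real table is similar but not identical):

    -- Hypothetical layout for time-series style inserts plus per-key reporting.
    -- A clustered index on the insert-ordered datetime keeps new rows appending
    -- at the end; a nonclustered index covers the per-key report queries.
    CREATE TABLE dbo.Readings (
        reading_id   BIGINT IDENTITY(1,1) NOT NULL,
        key_id       INT           NOT NULL,
        reading_time DATETIME2(0)  NOT NULL,
        value        DECIMAL(18,4) NOT NULL,
        CONSTRAINT PK_Readings PRIMARY KEY NONCLUSTERED (reading_id)
    );
    CREATE CLUSTERED INDEX CIX_Readings_Time ON dbo.Readings (reading_time, key_id);
    CREATE NONCLUSTERED INDEX IX_Readings_Key_Time
        ON dbo.Readings (key_id, reading_time) INCLUDE (value);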
SSIS Connection String Expression at Run Time and during Execution Posted: 29 Aug 2013 07:50 PM PDT I have an Excel connection manager which should write to a file which does not exist prior to runtime but rather is created during execution. I have an expression set for the connection string property of the Excel connection manager which points to the file that will be created. The problem I am having is that since the file does not yet exist (prior to execution), when I go to run the package it gives me an error since it cannot find the file. If I create a dummy file and place it where the Excel connection manager is pointing, the package will run okay. What I have tried to do is point the Excel connection manager to an existing file prior to run time, since that will not give an error, hoping it will then pick up the new path (Excel connection string) from the expression. But what is happening is that since I have an expression set for the connection string property, it will not let me overwrite the connection string property; it keeps defaulting to the value of the expression, which contains other variables that are populated during run time, so that is where I am getting my error. The expression string prior to runtime is not pointing to an actual file because it does not yet exist, but mainly because the expression is not fully populated until runtime since it also contains variables. As of right now it seems like I might have to just point to a dummy template file (without using any expressions) prior to runtime and then maybe use a Script Task to change the connection string property of the Excel connection manager to a variable containing an expression with the path to the file. Does this sound like the way to go, or am I over-complicating things? |
Deadlock from mysqldump --single-transaction, is it possible? Posted: 29 Aug 2013 02:39 PM PDT Is it possible to get a deadlock if you run mysqldump with the argument --single-transaction? Or any other complication, if it is used to back up a live site with many other queries running at the same time? Extra info: the mysqldump process could take more than a few minutes to complete. The tables are InnoDB. EDIT: I am concerned about the deadlock errors that could happen in the live applications relying on the database during the time that mysqldump is running. |
Why would running a SQL query overnight break my database for the following day? Posted: 29 Aug 2013 12:23 PM PDT I apologise in advance if this is a duplicate question (which I bet it will be). I had a good search through and found similar questions but nothing that seemed an exact match. I wrote a report that takes around 30 minutes to run. It does a lot of number crunching and I am quite happy that it takes so long to execute but this means the report can't be interactive. Basically the report has to calculate a profit and loss report as if the company has lost exactly one client. Then it puts that client back and runs the report again as if the next client was lost, etc. I think 30 minutes is actually rather good performance considering how intensive this is. Users are happy with this report being updated nightly and then they can see the cached results the following day. So the plan was to run the report overnight and save the output somewhere. I wrote a stored procedure to do this and I can run it during the day in around 30 minutes. However, when the report runs overnight as a SQL Agent job it NEVER completes. It kicks off at 9pm and when I get into work the next day I will find my inbox flooded with emails saying that SQL Server is broken, isn't performing properly, etc. I kill the job and everything goes back to normal. Except it doesn't go back to normal. What happens next is that ANY report that is run against the same database will timeout. The only way to correct this is to either wait a day (without running the overnight report) or to run another query against the same database directly from SSMS as this seems to reset the problem for some reason. Obvious questions:
My personal opinion is that the query engine is making bad decisions about which index to use, but when I run a report interactively from SSMS this somehow resets "something" internally and it goes back to running reports using the right indexes again. I have no evidence to back this up apart from the fact that the behaviour is the same if I disable one of the indexes on one of the "raw data" tables. When I say "run a report interactively" I just mean this: Where FormatMonthEndReport is a table-valued function. I don't know if this is even relevant, but I make a lot of use of APPLY in all of these reports. I imagine more detail will be asked for, but I don't want to add too much detail at this stage as this would rapidly turn into a wall of text. I will come back and see what is asked for first so my detail is more targeted. Edit - it looks like running a SQL Profiler trace is going to be the first step. I am going to set this up to run overnight and then report back tomorrow. I need to leave so I won't have time to respond to the other suggestions until then... but I am not ignoring you! |
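When the job next hangs overnight, my plan is to capture what its session is actually doing and waiting on with something along these lines (a generic sketch, nothing specific to my schema):

    -- Currently executing requests: what each session is running, what it is
    -- waiting on, and whether another session is blocking it.
    SELECT r.session_id,
           r.status,
           r.command,
           r.wait_type,
           r.wait_time,
           r.blocking_session_id,
           t.text AS current_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID;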
Slow query and Insert with trigger Posted: 29 Aug 2013 01:03 PM PDT I have a vehicle tracking system application based on PHP and SQL Server 2008 R2 Enterprise Edition, running on an HP Z800 XEON server. My database is growing by millions of rows day by day, which makes selects and inserts very slow. I have one main table named TraceData that contains all records for each signal from the devices; more records come into this table every second. When we try to generate a history or report for any vehicle it gets stuck and takes very long, sometimes more than 5 minutes. The structure of my table is mentioned below: I also have one trigger on it. Any help will be really appreciated. |
Replicating from master to slaves and from slaves to master Posted: 29 Aug 2013 04:27 PM PDT I have a master and multiple slave machines. All SQL operations on the master node should be sent to the slave nodes. But the tables on the slave nodes may also be written to locally, and these local changes on a slave node should be reflected back to the master node. For example, let's say I have a master machine, let's call it "M", and two slave machines, let's call them "A" and "B". All these machines have a table named test_table with a column named "id". I insert data into test_table on machine M. Now this change is reflected to the slaves: Now on slave A, I make a local change. Now this change is reflected to the master server: And then the master server replicates this change to the slaves: What is this kind of replication called? And how can I achieve this on PostgreSQL? |
how to create an incremental database copy in postgresql? Posted: 29 Aug 2013 04:18 PM PDT Virtual machine software like VirtualBox allows one to make incremental VM clones. That is, data, once "touched" (opened writable), is copied and stored in the incremental cache of the new clone. I am searching for the same option for a Postgres DBMS. How can I set up an incremental database copy, where entries are read from the original database and touched/modified rows from the local copy? If not at the DBMS level, how can I emulate such behaviour at the file-system/storage level using a separate DBMS instance? Background: The idea is to utilize the powerful database server without incurring much resource overhead for a staging/developer database copy. Feel free to edit the subject or post to improve clarity. |
.bak file not visible in any directory Posted: 29 Aug 2013 04:17 PM PDT I have a .bak file created today by someone else, manually created through SSMS 2008 R2. I'm trying to manually restore the database, unfortunately the file isn't appearing when I go to browse it. I can script the restore process, but I've seen this problem before and I'm not sure what could cause the .bak to not appear. |
Minimizing Page Fetches Posted: 29 Aug 2013 02:17 PM PDT I have a complicated database structure and am trying to use it to retrieve records based on multiple selection criteria from several tables. As a general rule, is it better to attempt to use correlated sub-queries to check the state of flags in other tables to determine eligibility, or am I better off creating views that utilize joins representing the valid records? As a more concrete example, is this: superior or inferior to something like this: -- Edited -- As a corollary question: is it productive to create views which contain intermediary validations? I.e., if I repeatedly need to check if And then later use the existence of a record in T1Validated in subsequent checks, or is that likely to produce additional database page retrievals and/or table scans? |
Query to get reposts from people the user is following Posted: 29 Aug 2013 01:17 PM PDT I have a posts table: | post_id | post_user | post_content | post_date | The users table is as follows: | user_id | username | user_joined | The user relationship table is as follows: | follower | following | rel_date | This is the query I am using to get the posts from people that the user is following, to show them.
Now I want users to share posts, for which I created a table repost_user as follows: | repost_user | original_post_user | post_id | repost_date | I want to get posts from people that the user is following, including reposts too. How do I do this? EDIT: How my result set should look:
For example, if it's a normal post the row should look like:
If it's a repost the row would be:
|
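A rough sketch of the kind of query I imagine is needed, though I'm not sure it's the right approach (the posts and relationships table names are guesses, and 123 stands for the viewing user's id):

    -- Original posts from people the user follows, plus posts reposted by
    -- people the user follows, merged into one feed ordered by date.
    SELECT p.post_id, p.post_user AS author, NULL AS reposted_by,
           p.post_content, p.post_date AS feed_date
    FROM posts p
    JOIN relationships r ON r.following = p.post_user
    WHERE r.follower = 123

    UNION ALL

    SELECT p.post_id, p.post_user AS author, ru.repost_user AS reposted_by,
           p.post_content, ru.repost_date AS feed_date
    FROM repost_user ru
    JOIN posts p         ON p.post_id = ru.post_id
    JOIN relationships r ON r.following = ru.repost_user
    WHERE r.follower = 123

    ORDER BY feed_date DESC;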
Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop Posted: 29 Aug 2013 07:17 PM PDT I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform. Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it? Thanks... |
Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour! Posted: 29 Aug 2013 06:17 PM PDT I have a server right now receiving more raw data files in 1 hour than I can upsert (insert -> merge) in an hour. I have a table with 100M (rounded up) rows. The table is currently MyISAM and has 1000 columns, mostly boolean and a few varchar. Currently the fastest way I've found to get the information into my DB has been: Process raw data into CSV files. LOAD DATA INFILE into the rawData table. Insert the rawData table into Table1 (on dupe key do my function). Truncate rawData. Repeat. This worked fine until I'm merging 6M+ rows into 100M rows and expecting it to take under an hour. I've got 16G of RAM, so I set key_buffer_size to 6G, my query cache pool to 16M and my query cache limit to 10M. I would just replace the information, however it has to be an upsert: update the fields that are true if the row exists and insert if it does not. Things I'm looking into at the moment: - Possibly switching the table to InnoDB? |-> Not sure about the performance, as the insert into an empty table is fine; it's the merge that's slow. Maybe allowing more table cache? Or even query cache? Merge code:
To compare my 2 bool columns. Update
|
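The merge step is essentially of this shape, heavily simplified (placeholder column names and file path; the real table has roughly 1000 columns):

    -- Load the staged CSV, then upsert into the main table: flags that are
    -- already true stay true, otherwise the incoming value wins.
    LOAD DATA INFILE '/tmp/batch.csv'
    INTO TABLE rawData
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

    INSERT INTO Table1 (id, flag_a, flag_b, note)
    SELECT id, flag_a, flag_b, note
    FROM rawData
    ON DUPLICATE KEY UPDATE
        flag_a = GREATEST(flag_a, VALUES(flag_a)),
        flag_b = GREATEST(flag_b, VALUES(flag_b)),
        note   = VALUES(note);

    TRUNCATE TABLE rawData;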
USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view) Posted: 29 Aug 2013 12:17 PM PDT Does anyone know what triggers the USER_ID field in the log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view. I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, it appears that value is always NULL. |
Idle connection plus schema-modifying query causing locked database Posted: 29 Aug 2013 05:51 PM PDT As part of our automated deployment process for a web app running on a LAMP stack, we drop all our triggers and stored procedures and recreate them from source control. It turns out there was a hidden danger to this approach that we hadn't thought about. A few days ago we managed to end up with the database for (the staging version of) our web app stuck in a horribly hung state after the following sequence of events:
What's interesting is that this scenario wasn't caused by any kind of deadlock; it was caused by a sleeping connection implicitly holding some kind of lock that prevented the We've talked the problem over in the office, and there are a couple of hypothetical solutions we saw:
We're not sure if either of the first two solutions we considered are even possible in MySQL, though, or if we're missing a better solution (we're developers, not DBAs, and this is outside of our comfort zone). What would you recommend? |
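One mitigation we've wondered about (untested, and it assumes MySQL 5.5+ where the variable exists) is capping how long the deploy script's DDL will queue behind an idle transaction's metadata lock, so it fails fast instead of stalling everything:

    -- In the deployment connection, before dropping/recreating triggers and
    -- stored procedures: give up after 30 seconds instead of waiting forever.
    SET SESSION lock_wait_timeout = 30;

    DROP TRIGGER IF EXISTS some_trigger;   -- trigger name is a placeholder
    -- ... recreate triggers and stored procedures from source control ...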
Bitmask Flags with Lookup Tables Clarification Posted: 29 Aug 2013 08:17 PM PDT I've received a dataset from an outside source which contains several bitmask fields stored as varchars. They range in length from 3 to 21 values. I need to be able to run SELECT queries based on these fields using AND or OR logic. Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records. Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results. I came across this answer while searching and it looked like it might fit my needs, but I need some clarification on how it is implemented. Is this as simple as creating a lookup table that has all possible search conditions? Example for 3 bits using (a & b) (Edit: Wrong bitwise op) The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows. Any clarifications on the answer I linked above, or other suggestions that would preserve the existing database, are appreciated. Edit: A more concrete example using small data. |
Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &). Adding a single calculated column for each bitmask is ok, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data is why I'm trying to find alternative solutions. |
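For contrast, the alternative I keep coming back to, which may or may not be what that answer intends, is normalizing the bits into a narrow bridge table (names invented for illustration):

    -- One row per set bit per record, so flag tests become joins/aggregates
    -- instead of bitwise & over a huge table.
    CREATE TABLE flag_def (
        flag_id   INT PRIMARY KEY,        -- bit position
        flag_name VARCHAR(50) NOT NULL
    );

    CREATE TABLE record_flag (
        record_id BIGINT NOT NULL,
        flag_id   INT    NOT NULL,
        PRIMARY KEY (record_id, flag_id)
    );

    -- "at least one of flags 2, 5, 9 set" (the OR case):
    SELECT DISTINCT record_id FROM record_flag WHERE flag_id IN (2, 5, 9);

    -- "all of flags 2, 5, 9 set" (the AND case):
    SELECT record_id FROM record_flag
    WHERE flag_id IN (2, 5, 9)
    GROUP BY record_id
    HAVING COUNT(*) = 3;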
SQL Server 2012 catalog.executions to sysjobhistory - any way to join them? Posted: 29 Aug 2013 03:17 PM PDT I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job. |
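The closest I've managed is a fuzzy match on package name plus a window around the start time, something like the sketch below (I know it isn't foolproof, and the handling of the catalog's datetimeoffset versus msdb's run_date/run_time needs care):

    -- Approximate pairing of SSIS catalog executions with the Agent job steps
    -- that launched them, matched on package name and a +/- 2 minute window.
    SELECT e.execution_id,
           e.package_name,
           e.start_time,
           j.name AS job_name,
           h.run_date,
           h.run_time,
           h.run_duration
    FROM SSISDB.catalog.executions AS e
    JOIN msdb.dbo.sysjobsteps   AS s ON s.command LIKE '%' + e.package_name + '%'
    JOIN msdb.dbo.sysjobs       AS j ON j.job_id = s.job_id
    JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
                                    AND h.step_id = s.step_id
                                    AND msdb.dbo.agent_datetime(h.run_date, h.run_time)
                                        BETWEEN DATEADD(MINUTE, -2, CONVERT(datetime2, e.start_time))
                                            AND DATEADD(MINUTE,  2, CONVERT(datetime2, e.start_time));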
SQLite writing a query where you select only rows nearest to the hour Posted: 29 Aug 2013 05:17 PM PDT I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one reading could be at 00:59:55 and the next at 01:01:01) and days may be missing. What I need is the row nearest to the hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour. Where a matching hour cannot be found it would be helpful if the query could include a time but no value. I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out: |
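The direction I've been poking at (a sketch only; my real table and column names differ) leans on SQLite's behaviour where, with a single MIN() in a GROUP BY, the bare columns come from the row that produced the minimum:

    -- Round every reading to its nearest hour, then keep the reading closest
    -- to that hour; by construction it is never more than 30 minutes away.
    SELECT ((ts + 1800) / 3600) * 3600                AS hour_ts,
           MIN(ABS(ts - ((ts + 1800) / 3600) * 3600)) AS seconds_off,
           value
    FROM readings
    WHERE sensor_id = 1
    GROUP BY hour_ts
    ORDER BY hour_ts;

Hours with no reading at all simply won't appear in the output, so I'd still need something to pad in the empty hours.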
Listing the existing SQL Server Logins and Users Posted: 29 Aug 2013 01:37 PM PDT I know we can check the logins and the users that are defined using the GUI in SQL Server, but I am wondering how we can do this check using a script. I ran the query below but it shows principal_id, which I'm not sure how to map to get the permission level. So is there any built-in stored procedure that can list the logins and the users with their permission level? Thank you. |
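This is the sort of script I've been playing with so far (a rough sketch; I'm not sure it captures every permission level):

    -- Server-level logins with their server role memberships.
    SELECT sp.name  AS login_name,
           sp.type_desc,
           r.name   AS server_role
    FROM sys.server_principals AS sp
    LEFT JOIN sys.server_role_members AS rm ON rm.member_principal_id = sp.principal_id
    LEFT JOIN sys.server_principals   AS r  ON r.principal_id = rm.role_principal_id
    WHERE sp.type IN ('S', 'U', 'G')          -- SQL logins, Windows logins/groups
    ORDER BY sp.name;

    -- Database users in the current database with their database role memberships.
    SELECT dp.name AS user_name,
           dp.type_desc,
           r.name  AS database_role
    FROM sys.database_principals AS dp
    LEFT JOIN sys.database_role_members AS rm ON rm.member_principal_id = dp.principal_id
    LEFT JOIN sys.database_principals   AS r  ON r.principal_id = rm.role_principal_id
    WHERE dp.type IN ('S', 'U', 'G')
    ORDER BY dp.name;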
slow load speed of data from mysqldump Posted: 29 Aug 2013 11:44 AM PDT I've got a moderate size MySQL database with about 30 tables, some of which are 10 million records, some 100 million. When I load the data into MySQL on another box, a six-core, 8GB machine, it takes forever. Easily 12 clock hours or more. I'm just running the mysql client to load the file, i.e. feeding it the file directly out of mysqldump. Clearly I am doing something wrong. Where do I start so it can finish in a reasonable time? I'm not using any switches on either the dump or the load. |
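One thing I'm about to try, based on common advice for big InnoDB imports (untested on my data, and the file path is a placeholder):

    -- In the same mysql client session that loads the dump:
    SET SESSION foreign_key_checks = 0;
    SET SESSION unique_checks = 0;
    SET SESSION sql_log_bin = 0;   -- only if the load shouldn't replicate

    SOURCE /path/to/dump.sql;

    SET SESSION unique_checks = 1;
    SET SESSION foreign_key_checks = 1;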
Add Oracle Label Security to an existing Oracle installation Posted: 29 Aug 2013 01:45 PM PDT We are using Amazon EC2 to host some Oracle database instances while we are evaluating and prototyping some software. We are using the AMIs provided by Oracle - specifically, Oracle Database 11g Release 2 (11.2.0.1) Enterprise Edition - 64 Bit (Linux). I now need to do some work based on Oracle Label Security - OLS. It appears that when Oracle was first installed into an AMI, the Oracle Label Security option was not enabled. So I want to add OLS to an existing installation of Oracle. I have tried following the installation instructions in the Oracle Label Security Administrator's Guide, using media downloaded by following the download links on the Oracle home page. However, the instructions don't match up with what I see when I run the installer - according to the instructions, on the second page of the wizard I should see an Advanced Installation option, but I don't see that - I can only see three radio buttons, "Create and configure a database", "Install database software only" and "Upgrade an existing database". I tried to muddle through the instructions but there were several inconsistencies, as if I was running a different version of the installer to the documentation writer. The AMI, documentation and media are all Oracle Database 11g Release 2 (11.2.0.1), so I don't know why they don't all seem to match. I tried to proceed with an installation anyway, and did find the checkbox to install Oracle Label Security. However it would not accept my existing ORACLE_HOME as an installation destination, telling me: [INS-32025] The chosen installation conflicts with software already installed in the given Oracle home. So, in short - is it possible for me to add the OLS option to an existing installation? If so - how? If not - how do I make a new installation and attach the existing database to the new installation? Edit 2011-05-23: After doing some more poking around, I found a copy of the Oracle installer at Then, the first odd thing: it gave me a long list of installable products. Oracle Label Security was in this list, but it was already marked as "Installed". However, if I clicked the Installed Products... button for information, Oracle Label Security was not present. I selected Oracle Label Security anyway and it changed to say "reinstall". But on clicking Next I got this error: |
How can I track database dependencies? Posted: 29 Aug 2013 04:32 PM PDT As internal applications evolve over a number of years, you occasionally find there are a number of tables that people believe are no longer relevant and want to cull. What are the practical methods for identifying database dependencies, both within the SQL environment, and maybe onward into things like SSIS? I've worked places where fairly brutal options have been taken such as:
I appreciate that SQL Server comes with tools for tracking dependencies within that instance, but these seem to struggle if you have databases on different instances. Are there options that make it easier to query dependencies, maybe answering questions like "Where is this column used?" with answers like "Over on this other server in this stored procedure" or "Over in this SSIS package"? |
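For the within-instance part, the sort of query I mean is the sketch below (it uses the built-in dependency views, so it won't see references from other instances or from SSIS; 'SomeTable' is a placeholder name):

    -- Objects in the current database that reference a given table.
    SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
           OBJECT_NAME(d.referencing_id)        AS referencing_object,
           d.referenced_schema_name,
           d.referenced_entity_name
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_entity_name = 'SomeTable';

    -- The same question asked through the dynamic management function:
    SELECT referencing_schema_name, referencing_entity_name
    FROM sys.dm_sql_referencing_entities('dbo.SomeTable', 'OBJECT');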