[how to] Is this table in 3NF?
- Is this table in 3NF?
- Can I drop and recreate a system extended stored procedure?
- How to flatten results of a table with two related "many" tables?
- Is it possible to select internal InnoDB columns?
- SSIS Suddenly Requires DCOM Permissions
- How to override SSIS 2008 package config file path?
- Why would SequelPro only import 23k rows out of 130k?
- PL/SQL SUM - ORA-00937: not a single-group group function
- What is the future for system and database administration [closed]
- Why some of mysql system variables are readonly
- Does SQL Server place shared locks on scanned records when using REPEATABLE READ [duplicate]
- Does SQL Server place shared locks on scanned records when using REPEATABLE READ
- Header and line item data source mismatch
- Restoring a database backup to a local machine in MySQL
- Simple MySQL query randomly takes forever
- Backing up file groups with restore to earlier backup SQL Server 2008
- problem running xp_cmdshell when Application Name set in connection string
- What could cause the wrong ID to be inserted?
- How to migrate data from SQL database to MySQL including flat files?
- SQL Server replication conflicts after migration from 2000 to 2008
- Tools and methodologies to keep two DBs aligned
- How to connect to a Database made by Oracle SQL Database?
- replicate-wild-ignore-table does not stop already logged queries on slave
- How do I display subtotals in cube browser in SSAS 2012 (Sql Server Data Tools)?
- Bulk insert into SQL Server from VMWare guest using distributed switch
- Merge Replication SQL
- Migrating MySQL to new server
- MySQL: Lock wait timeout exceeded
- MI Data Warehouse Advice
Is this table in 3NF? Posted: 22 Apr 2013 07:23 PM PDT I have a table ...
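The table itself did not survive the digest, so as a stand-in, here is the usual 3NF test sketched on a hypothetical schema: every non-key column must depend on the key, the whole key, and nothing but the key (in particular, no transitive dependencies).

    -- Hypothetical example: this table is NOT in 3NF, because dept_name depends
    -- on dept_id (a non-key column), so it depends on the key only transitively.
    CREATE TABLE employee (
        emp_id    INT PRIMARY KEY,
        emp_name  VARCHAR(100),
        dept_id   INT,
        dept_name VARCHAR(100)   -- transitive: emp_id -> dept_id -> dept_name
    );

    -- A 3NF decomposition moves the transitive dependency into its own table.
    CREATE TABLE department (
        dept_id   INT PRIMARY KEY,
        dept_name VARCHAR(100)
    );

    CREATE TABLE employee_3nf (
        emp_id   INT PRIMARY KEY,
        emp_name VARCHAR(100),
        dept_id  INT REFERENCES department (dept_id)
    );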
Can I drop and recreate a system extended stored procedure? Posted: 22 Apr 2013 05:29 PM PDT I am having a problem with one of my servers where I cannot execute xp_cmdshell under certain conditions. The full problem can be found here: problem running xp_cmdshell when Application Name set in connection string. I want to see if dropping and recreating it will fix the issue (on a test server first). I'm grabbing at straws here.
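For what it's worth, an extended stored procedure is just a registration pointing at a DLL export, so it can in principle be dropped and re-registered; the sketch below is a cautious way to try that on a test server. The DLL name is an assumption and should be confirmed first with sp_helpextendedproc.

    USE master;
    GO
    -- Confirm which DLL currently backs xp_cmdshell before touching anything.
    EXEC sp_helpextendedproc 'xp_cmdshell';
    GO
    -- Drop and re-register it (test server only; 'xplog70.dll' is what the
    -- statement above typically reports, so use whatever it actually returned).
    EXEC sp_dropextendedproc 'xp_cmdshell';
    GO
    EXEC sp_addextendedproc 'xp_cmdshell', 'xplog70.dll';
    GO
    -- xp_cmdshell still has to be enabled at the instance level afterwards.
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;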
How to flatten results of a table with two related "many" tables? Posted: 22 Apr 2013 03:39 PM PDT I've reorganized some tables in my database to be more flexible, but I'm not really sure how to write the SQL to extract meaningful data from them. I have the following tables (somewhat abbreviated for a clearer example), with the following data: The objective is to get a query that is flat for a loan with all its fields. (In the real world there will likely be between 20 and 30 fields for the same schema, but we have just 2 in the example.) I cannot use a pivot that references 'First Name' and 'Last Name' because I'll have no idea what will actually be there. I have a SQL Fiddle here with the schema already in place. How can I get the desired result?
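Since the field names aren't known in advance, one common approach is to build the PIVOT column list dynamically from the field-definition table. The table and column names below are hypothetical, because the SQL Fiddle schema isn't reproduced in this digest.

    -- Hypothetical schema: Loan(LoanID), LoanField(FieldID, FieldName),
    -- LoanFieldValue(LoanID, FieldID, FieldValue).
    DECLARE @cols nvarchar(max), @sql nvarchar(max);

    -- Build a column list such as [First Name],[Last Name],... from whatever fields exist.
    SELECT @cols = STUFF((
        SELECT ',' + QUOTENAME(FieldName)
        FROM LoanField
        ORDER BY FieldName
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 1, '');

    SET @sql = N'
    SELECT LoanID, ' + @cols + N'
    FROM (
        SELECT v.LoanID, f.FieldName, v.FieldValue
        FROM LoanFieldValue AS v
        JOIN LoanField      AS f ON f.FieldID = v.FieldID
    ) AS src
    PIVOT (MAX(FieldValue) FOR FieldName IN (' + @cols + N')) AS p;';

    EXEC sp_executesql @sql;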
Is it possible to select internal InnoDB columns? Posted: 22 Apr 2013 02:01 PM PDT I have read that InnoDB uses a few fields internally for all rows, such as the hidden columns DB_TRX_ID and DB_ROW_ID. Is there any easy way to see these values, such as within a select statement?
SSIS Suddenly Requires DCOM Permissions Posted: 22 Apr 2013 02:18 PM PDT I have recently deployed some SSIS packages and made a job to run them one after another. However, I am not allowed to use MSDTC. So I chained the jobs together like I have done before, but now I am faced with an error message about DCOM. All Internet research points to insufficient permissions on the MSDTC component, but I am worried this is going the wrong way. Any idea how I can "reset" the need for MSDTC for this package?
How to override SSIS 2008 package config file path? Posted: 22 Apr 2013 01:29 PM PDT I have an SSIS 2008 package with a single configuration named Config. I would like to override the path to that configuration file at run time. According to MSDN we should be able to do this using the /SET command, as it mentions here:
I've tried the /SET command, and it fails with an error.
I've also pasted a redacted version of my .dtsx file here in case anyone thinks I'm just mistyping something, and here are the contents of the config that I'm trying to change it to. I realize there are other ways of doing indirect configs using environment variables, etc., but that is not what I'm interested in, so please do not offer up those suggestions. I'm specifically trying to use the /SET command, since Microsoft has explicitly stated that we should be able to. Due to the way our environment is set up, this method would be the best for us.
Why would SequelPro only import 23k rows out of 130k? Posted: 22 Apr 2013 02:18 PM PDT I use Sequel Pro for MySQL on Mac OS X, and I used the import function to upload a 130k-row .csv file to my database. Everything seems to work fine, then I get an error message. When I hit "Ok," everything else seems to work relatively fine; I'm just missing about 107,000 rows. Any idea as to what it could be? Maybe I should use something other than auto-detect during the import? I thought it might have been some extra commas floating around in the actual .csv file, which there were, but I got rid of those and the same thing happened. Thanks! Edit: I also don't know if this is an option for everybody, but since the problem stems from exporting from a Windows machine to a Mac and then opening it in Excel for Mac, the problem seems to go away if I just use OpenOffice instead.
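If the GUI import keeps stopping early, one way to take auto-detection out of the picture is to load the file from the mysql client with the line endings stated explicitly; Excel for Mac has historically saved CSVs with bare \r line endings, which is a common cause of partial imports. Table and file names below are placeholders.

    -- Placeholder names; adjust the terminator to match the actual file
    -- ('\r\n' for Windows exports, '\r' for classic Mac/Excel-for-Mac exports).
    LOAD DATA LOCAL INFILE '/path/to/export.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\r'
    IGNORE 1 LINES;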
PL/SQL SUM - ORA-00937: not a single-group group function Posted: 22 Apr 2013 01:48 PM PDT For every journey I want to sum the values to work out the distance in between. The tables are as follows: However, I am getting the following error: I am certain this is something simple to resolve.
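ORA-00937 usually means an aggregate such as SUM() is mixed with non-aggregated columns and there is no GROUP BY. Since the actual tables didn't make it into the digest, here is a generic sketch of the pattern and its fix; all names are hypothetical.

    -- Raises ORA-00937: journey_id is selected alongside SUM() with no GROUP BY.
    SELECT journey_id, SUM(distance)
    FROM   journey_leg;

    -- Fix: group by every non-aggregated column in the select list.
    SELECT journey_id, SUM(distance) AS total_distance
    FROM   journey_leg
    GROUP BY journey_id;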
What is the future for system and database administration [closed] Posted: 22 Apr 2013 10:54 AM PDT Let me present myself for a better understanding of my challenge. I love coding; I've been at a company as a software developer for 5 years. But for the past 3 years I've been pushed more and more into deployment and reporting (development and configuration, scheduling, etc.) because I have developed a love for Linux systems (Ubuntu), which I currently use as my main operating system. I've currently inherited the management of 4 online dedicated Linux servers, and I learn as I go, since I've not considered it my main job. This morning I was asked whether I would like to join an engineering operations department that is going to be created, and I love both coding and these deployment and database tweaks. I'm of course interested, but I would not want to shoot myself in the foot, as I don't have enough information about the field, and whether it's a dying field as computers get smarter and smarter. I would like those in the field to throw in some insight. I do, however, have more direct questions:
Thanks
Why some of mysql system variables are readonly Posted: 22 Apr 2013 11:18 AM PDT There are roughly 277 MySQL server system variables. Some of these variables are dynamic and a few are not. I don't understand the reasoning behind the read-only variables: why are a few of MySQL's system variables read-only, and what would happen if they were made dynamic? Is there any deeper reason for the read-only variables? For now we can consider:
and there are many other variables we can find at http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Of course, I know we can't change a variable like ...
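For illustration, here is the difference in behavior between a dynamic variable and a read-only one. The specific variables mentioned in the question were lost in the digest; max_connections and datadir below are just representative examples.

    -- Dynamic variable: can be changed at runtime and takes effect immediately.
    SET GLOBAL max_connections = 200;

    -- Read-only variable: the server rejects the change with
    --   ERROR 1238 (HY000): Variable 'datadir' is a read only variable
    -- It can only be set in my.cnf or on the command line, followed by a restart.
    SET GLOBAL datadir = '/var/lib/mysql-new';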
Does SQL Server place shared locks on scanned records when using REPEATABLE READ [duplicate] Posted: 22 Apr 2013 10:53 AM PDT This question already has an answer here: Assume a simple table. In this session I start a transaction under REPEATABLE READ and execute a statement against it. If I examine the locks placed under this transaction after running the statement, I don't see the shared locks I would expect on the rows that were scanned. My question is: shouldn't the database engine place shared locks on all the rows it read to ensure repeatable reads? If I execute a second query in the same transaction ... Can anyone help me understand this situation? I've tried with both SQL Server 2008 R2 and 2012; the behavior is the same in both.
Does SQL Server place shared locks on scanned records when using REPEATABLE READ Posted: 22 Apr 2013 10:52 AM PDT Assume a simple table. In this session I start a transaction under REPEATABLE READ and execute a statement against it. If I examine the locks placed under this transaction after running the statement, I don't see the shared locks I would expect on the rows that were scanned. My question is: shouldn't the database engine place shared locks on all the rows it read to ensure repeatable reads? If I execute a second query in the same transaction ... Can anyone help me understand this situation? I've tried with both SQL Server 2008 R2 and 2012; the behavior is the same in both.
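Since the statements themselves were lost in the digest, here is a minimal sketch of one way to run this kind of experiment, using a plain SELECT and sys.dm_tran_locks to list the locks the open transaction is holding; the table and column names are hypothetical.

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;

    -- A plain read under REPEATABLE READ (hypothetical table/column).
    SELECT * FROM dbo.SomeTable WHERE SomeColumn = 42;

    -- Inspect the locks held by this session's transaction.
    SELECT resource_type, request_mode, request_status, COUNT(*) AS cnt
    FROM   sys.dm_tran_locks
    WHERE  request_session_id = @@SPID
    GROUP BY resource_type, request_mode, request_status;

    ROLLBACK;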
Header and line item data source mismatch Posted: 22 Apr 2013 10:35 AM PDT I'm working with a star schema for a data warehouse and I am running into a problem with header and line items from different data sources. I have data sources A and B which represent the same data in different ways. Data source A contains headers and line items, but it only has the net outcome (Total Cost) at the header level. Data source B contains only line items, and each item has an outcome (Cost). I could keep two fact tables (one for the headers and one for the line items), but from my research that seems inadvisable. Is there a way to deal with this kind of mismatched format? My current strategy:
Here is an example for a record from data source A: ... Here is an example for a record from data source B: ...
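For what it's worth, the usual alternative to two fact tables is a single fact table at line grain, with data source A's header-level Total Cost allocated down to its lines so both sources feed the same measure. A rough sketch with purely illustrative names and the simplest possible (even) allocation:

    -- Illustrative line-grain fact covering both sources.
    CREATE TABLE fact_order_line (
        source_system CHAR(1)       NOT NULL,  -- 'A' or 'B'
        order_key     INT           NOT NULL,  -- header/degenerate dimension
        line_key      INT           NOT NULL,
        cost          DECIMAL(18,2) NOT NULL,  -- B: line Cost; A: allocated share of Total Cost
        PRIMARY KEY (source_system, order_key, line_key)
    );

    -- Example allocation for source A: spread the header total evenly across its lines.
    -- (Any allocation rule could be used; even spread is just the simplest.)
    INSERT INTO fact_order_line (source_system, order_key, line_key, cost)
    SELECT 'A', h.order_id, l.line_id, h.total_cost / c.n
    FROM   stg_a_header AS h
    JOIN   stg_a_line   AS l ON l.order_id = h.order_id
    JOIN  (SELECT order_id, COUNT(*) AS n
           FROM stg_a_line GROUP BY order_id) AS c ON c.order_id = h.order_id;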
Restoring a database backup to a local machine in MySQL Posted: 22 Apr 2013 11:50 AM PDT I have downloaded my database backup. I want to restore the backup on this machine. I used the command below, where database is my new database name on this machine. My confusion is that my ...
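Since the command itself didn't survive the digest, here is a minimal restore sketch from the mysql client, with placeholder names.

    -- Run from the mysql command-line client.
    CREATE DATABASE my_new_db;
    USE my_new_db;
    SOURCE /path/to/backup.sql
    -- Note: if the dump was taken with --databases or --all-databases it contains its
    -- own CREATE DATABASE/USE statements and will restore into the original database
    -- name, ignoring the database selected above.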
Simple MySQL query randomly takes forever Posted: 22 Apr 2013 02:08 PM PDT I'm having an issue on my server with a simple PHP send-emails script that runs every minute via a cron job. The script contains a MySQL query that runs very fast most of the time, but randomly takes "forever". It feels like a similar problem to this: http://stackoverflow.com/questions/976739/mysql-query-randomly-takes-forever At seemingly random times while the MySQL query is running, the load on my server spikes from an average of 0.5 up to 8 and my entire server slowly grinds to a near halt. The query does eventually complete, but sometimes it runs for over 10 minutes; on a "normal" occasion the query completes in 0.2 seconds. This is the query:
I've isolated it to this query because my server trucks along nicely without the load-average spikes when this cron job is turned off. My server is a VPS with 2GB of RAM. The database isn't huge (750MB in total), but the email_messages table being selected from is by far the largest at around 400MB. Through research I thought it was a MySQL query cache problem because the query cache was turned on, but turning off the query cache has not resolved the issue. I turned off the query cache by setting ... Any ideas at all as to what could be going on here? [CONCLUSION] I did an experiment by including a condition in the query using my primary key so that only the last 1000 table entries were being searched through. Running the cron job with this condition resolved the issue, and further investigation in the mysql console with ... I now need to read up on how to use HeidiSQL to properly create indexes, because HeidiSQL is telling me there is an index key on ...
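Following up on the conclusion, the usual next step is to confirm with EXPLAIN whether the query is scanning the whole email_messages table and, if so, to add an index matching the WHERE clause. The predicate and index below are placeholders, since the real query isn't shown in the digest.

    -- Check whether the query is doing a full scan of email_messages.
    EXPLAIN SELECT *
    FROM   email_messages
    WHERE  status = 'pending';          -- placeholder predicate

    -- If the 'key' column comes back NULL, add an index that matches the predicate.
    ALTER TABLE email_messages ADD INDEX idx_status (status);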
Backing up file groups with restore to earlier backup SQL Server 2008 Posted: 22 Apr 2013 04:45 PM PDT I have a task: I have created a database with a filegroup. I need to extract a CSV into a table in a filegroup named lamb and back up the lamb filegroup. Then I extract another CSV file and back up again, but I need to be able to restore the first backup. What is the backup and restore method? Is it possible to restore only the lamb filegroup?
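A hedged sketch of what the filegroup backup and restore could look like (database name and paths are placeholders). Note that restoring a single read-write filegroup to an earlier point generally also requires the FULL recovery model and log backups to roll forward, so treat this as a starting point rather than a complete recipe.

    -- Back up just the lamb filegroup (placeholder paths).
    BACKUP DATABASE MyDb
        FILEGROUP = 'lamb'
        TO DISK = 'D:\Backups\MyDb_lamb_1.bak';

    -- ...load the second CSV, then back up again...
    BACKUP DATABASE MyDb
        FILEGROUP = 'lamb'
        TO DISK = 'D:\Backups\MyDb_lamb_2.bak';

    -- Restore only the lamb filegroup from the first backup (online filegroup
    -- restore needs Enterprise; offline restore works on other editions).
    RESTORE DATABASE MyDb
        FILEGROUP = 'lamb'
        FROM DISK = 'D:\Backups\MyDb_lamb_1.bak'
        WITH NORECOVERY;
    -- Then restore the subsequent log backups and finish with:
    -- RESTORE DATABASE MyDb WITH RECOVERY;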
problem running xp_cmdshell when Application Name set in connection string Posted: 22 Apr 2013 02:50 PM PDT I have an app on 10 or so servers that runs a few xp_cmdshell statements against MSSQL 2008. It works fine on all the servers except for one. To make things worse, I can run all the commands in SQL Server Management Studio, but in the app they don't work. I even made a temporary app to test and it works fine! But in the deployed app I get a simple SQL error, "Access Denied". I have narrowed it down to the connection string: if I include the application name, it throws access denied, but only when calling xp_cmdshell; normal SQL statements work fine. If I remove the application name, it works fine for both normal SQL statements and calls to xp_cmdshell. The strange thing is, it's only happening on one of the ten servers. The only difference is that this server has SP1 and the others do not. @@VERSION returns ... I am thinking that there is some sort of authentication that can be granted to applications, but I can't seem to find anything. I can replicate it in SQL Server Management Studio by adding the application name to the Additional Connection Parameters tab on the Connect to Database Engine dialog when you create a new query or change its connection. The simple test statement I use is ... If anyone could shed some light on what is happening it would be much appreciated. EDIT: OK, after a bit more investigation it's even more confusing. If I set the Application Name to the default for .NET connections, it works fine. I can run xp_subdirs without any problems no matter which settings I use. Now this is where it gets really weird: the first two fail, but the last one succeeds with the application name set to my application name. But this only happens when xp_cmdshell gets called; xp_subdirs works with all three. With Application Name set in the connection ... With Application Name not set in the connection ... The error returned in the query messages area in SSMS when it fails ... This error message is only happening on the one server; I cannot replicate it on any others.
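Not an answer, but a small generic diagnostic sketch that may help narrow this down: check what application name the server actually sees for the failing connection, and whether that login effectively has EXECUTE on xp_cmdshell. Nothing here is specific to the asker's environment.

    USE master;

    -- What application name does this connection present?
    SELECT APP_NAME() AS app_name;

    -- What do current sessions look like from the server's point of view?
    SELECT session_id, login_name, program_name
    FROM   sys.dm_exec_sessions
    WHERE  is_user_process = 1;

    -- One way to check whether the current login effectively has EXECUTE on xp_cmdshell.
    SELECT HAS_PERMS_BY_NAME('sys.xp_cmdshell', 'OBJECT', 'EXECUTE') AS can_exec_xp_cmdshell;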
What could cause the wrong ID to be inserted? Posted: 22 Apr 2013 01:14 PM PDT I have a SQL Server 2008 server (build 10.0.5500). Earlier this week I ran the following on a table that already had data in it: When the user went to create a new record later on, somehow an ID of 0 was inserted into the ID column, instead of the 1 SQL Server usually puts in when identity(1,1) is configured for the ID. This caused some weird issues, but clearing the data and running the reseed resulted in a 1 being inserted, as expected. I can't duplicate the issue. For reference, here's the general format for our save sp's: Does anyone know what could cause SQL Server to insert a 0 for the ID when it should have been a 1?
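The statement that was run isn't shown in the digest; for reference, here is the documented behavior of DBCC CHECKIDENT with RESEED, which is the usual suspect in "ID 0 instead of 1" cases, illustrated on a hypothetical table.

    -- Hypothetical table with IDENTITY(1,1).
    CREATE TABLE dbo.Demo (ID int IDENTITY(1,1) PRIMARY KEY, Val varchar(10));

    -- Case 1: the table has already had rows inserted.
    INSERT INTO dbo.Demo (Val) VALUES ('a');       -- ID = 1
    DELETE FROM dbo.Demo;                          -- removed with DELETE, not TRUNCATE
    DBCC CHECKIDENT ('dbo.Demo', RESEED, 0);
    INSERT INTO dbo.Demo (Val) VALUES ('b');       -- next ID = 0 + 1 = 1, as expected

    -- Case 2: a brand-new or TRUNCATEd table.
    TRUNCATE TABLE dbo.Demo;
    DBCC CHECKIDENT ('dbo.Demo', RESEED, 0);
    INSERT INTO dbo.Demo (Val) VALUES ('c');       -- first ID is 0 itself, which can surprise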
How to migrate data from SQL database to MySQL including flat files? Posted: 22 Apr 2013 10:03 AM PDT A friend of mine asked an ASP developer to make an online application where users can also upload files (jpg, doc, pdf), and all these files are saved in the SQL database as flat files. Now my friend has asked me to redo everything in PHP, and he wants to migrate all the data he has in the SQL database to a MySQL database. I can do it; text can be migrated easily, but can the files that are saved in the SQL database also be migrated to MySQL as flat files? Is it possible?
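It is possible: the binary file content can be carried over into MySQL BLOB columns (or written out to disk and stored as paths). A minimal sketch of a possible MySQL target table, with illustrative names.

    -- Illustrative MySQL target for uploaded files.
    CREATE TABLE uploads (
        id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        file_name VARCHAR(255) NOT NULL,
        mime_type VARCHAR(100) NOT NULL,
        content   LONGBLOB     NOT NULL   -- raw bytes copied from the source database
    ) ENGINE=InnoDB;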
SQL Server replication conflicts after migration from 2000 to 2008 Posted: 22 Apr 2013 01:03 PM PDT I got a suggestion over at Stack Overflow to post here; grateful for any and all help. Please bear with me, I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with ... We recently migrated this entire solution as follows:
The new web box received the 64-bit version of SQL Server Compact 3.5 tools and we now call ... The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set such that HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will show up on their mobile device. The user can sync multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections or other updates, such as lookup table data, back down; typical merge replication, and it has been working great for years. Once the field user changes the status of the inspection, it travels from the mobile device to the SQL Server database and is removed from their iPaq. The inspection has additional workflow on the web app from there on out. Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. The settings on the publication are, as far as I can tell, the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, and then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact. It is as if the web/DB data is newer than the handheld data; I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration. Here is a summary of the order of operations: new row created in the database >> mobile user receives data >> mobile user updates data >> synchronizes; data is lost. Conflicts show up for all data lost. From here on out any additional mobile changes are captured, and merge replication works in both directions flawlessly. Thanks for taking the time to read; please help. I am stuck after 3 days.
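One place worth looking while diagnosing this is the conflict metadata that merge replication records in the publication database; it shows the reason each losing row was rejected. A generic sketch, with publication and article names as placeholders:

    -- Run in the publication database on the publisher.
    SELECT TOP (50)
           conflict_type, reason_text, origin_datasource, MSrepl_create_time
    FROM   dbo.MSmerge_conflicts_info
    ORDER BY MSrepl_create_time DESC;

    -- sp_helpmergeconflictrows can then show the losing row versions for a given article:
    -- EXEC sp_helpmergeconflictrows
    --     @publication    = N'YourPublication',
    --     @conflict_table = N'MSmerge_conflict_YourPublication_YourArticle';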
Tools and methodologies to keep two DBs aligned Posted: 22 Apr 2013 02:03 PM PDT Two DBs have schemas that represent the same semantic objects. The first one is the production DB (a non-RDBMS, in-house-implemented in-memory DB with a shitload of RAM). The other is Postgres. Once in a while the production DB is changed (schema upgrade). Question: what is the best practice for keeping DBs of different types aligned semantically?
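One common, tool-agnostic practice is to version the schema explicitly and apply the same numbered change scripts to both systems, so alignment can be checked with a single query. A minimal Postgres-side sketch (the in-house DB would keep an equivalent record); all names here are illustrative.

    -- Minimal schema-version ledger.
    CREATE TABLE IF NOT EXISTS schema_migrations (
        version     integer PRIMARY KEY,
        description text        NOT NULL,
        applied_at  timestamptz NOT NULL DEFAULT now()
    );

    -- Each schema upgrade script ends by recording itself.
    INSERT INTO schema_migrations (version, description)
    VALUES (42, 'add customer.email column');

    -- Checking alignment is then just comparing MAX(version) on both sides.
    SELECT max(version) AS current_version FROM schema_migrations;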
How to connect to a Database made by Oracle SQL Database? Posted: 22 Apr 2013 11:03 AM PDT So I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it and the online portion of it works fine. I can login with Username: sys and Password: **. What I am trying to do is connect to this newly created database on another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the Database from "localhost" to an IP Address. How do I do that, and is there anything else I need to do to make this work? I also found this LINK. Is this something I should do? I do not have a Domain though. listener.ora: ... tnsnames.ora: ...
replicate-wild-ignore-table does not stop already logged queries on slave Posted: 22 Apr 2013 12:03 PM PDT We have a master DB where multiple queries were fired on a specific table. These queries were taking too long to execute on the slave, and it was lagging behind; as a result, queries were queued and executing one by one. Later we identified that this table did not need to be replicated, so we added it to replicate-wild-ignore-table in the slave's my.cnf and restarted the slave. But the slave again picked up the remaining queries for that table. Ideally it should have skipped the queries for that table. We are not able to figure out why this is happening. Can somebody explain?
How do I display subtotals in cube browser in SSAS 2012 (Sql Server Data Tools)? Posted: 22 Apr 2013 05:03 PM PDT Apparently this feature was available in previous versions but has now been removed. What are some alternatives?
Bulk insert into SQL Server from VMWare guest using distributed switch Posted: 22 Apr 2013 09:57 AM PDT This is most likely not a SQL Server issue, but the setup seems to only be affecting BULK INSERTs to SQL Servers. We have recently moved VM hardware, and all the guests that were moved had their virtual switches changed from standard to distributed. I then started receiving an error
on two SQL Servers during BULK INSERT operations. One of the SQL Servers was a VM with the new configuration and the other was a physical server. Both BULK INSERT operations originated from a VM with the new configuration. The BULK INSERTs would not fail every time; it was very random when they would. When we changed the virtual switch back to a standard switch instead of a distributed switch, the issue went away. I am looking for more of an explanation of why it doesn't work with a distributed switch than for a resolution. My guess would be that the BULK INSERT operation is serial, and with a distributed switch the packets are being routed through different hosts, some of which may be busier than others, and are arriving at the destination server beyond some latency threshold. (Note: there is nothing in the Windows event log at the times of the errors on either the source or destination server.)
Merge Replication SQL Posted: 22 Apr 2013 04:03 PM PDT I am currently having a problem with the following: creating a merge replication is simple if the servers are on the same network, but I would like to know how I can do a merge replication if the servers are on separate networks. I am using SQL Server 2008 R2.
Migrating MySQL to new server Posted: 22 Apr 2013 06:03 PM PDT We have a MySQL 5.1 instance running on Windows Server 2008 R2 x64 (Server A) replicated to another MySQL 5.1 instance on Windows Server 2003 R2 (Server B). We have a new MySQL 5.5 instance on another Windows Server 2008 R2 x64 machine (Server C). I want to replace Server A with Server C. My plan is to set up Server C as another slave to Server A by 'cloning' Server B, like this: http://www.redips.net/mysql/add-new-slave/. After that I plan on doing this: http://blesseddlo.wordpress.com/2011/06/03/convert-mysql-slave-to-master/. After that I want to change Server B's master to Server C. Is this a viable approach? Are there caveats? Might there be any compatibility issues when setting a MySQL 5.5 instance as a slave to a MySQL 5.1 instance? I can't just stop everything and use ...
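For reference, the final repointing of Server B would look roughly like the sketch below; host, credentials, and binlog coordinates are placeholders and must come from SHOW MASTER STATUS on Server C at a point where B and C are consistent. One caveat: MySQL officially supports replicating from an older (or equal) version master to a newer slave, so the eventual 5.5 master with a 5.1 slave (Server B) falls outside the supported matrix.

    -- On Server B, after Server C has been promoted (placeholder values throughout).
    STOP SLAVE;
    CHANGE MASTER TO
        MASTER_HOST     = 'server-c.example.com',
        MASTER_PORT     = 3306,
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000001',   -- from SHOW MASTER STATUS on Server C
        MASTER_LOG_POS  = 107;                  -- ditto
    START SLAVE;
    SHOW SLAVE STATUS\G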
MySQL: Lock wait timeout exceeded Posted: 22 Apr 2013 03:03 PM PDT I have a developer who has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running ... Today while it was running I checked in on it periodically; everything looked normal, the query was running in state "copying to tmp table", and I could see the temp table on disk getting larger and larger (still plenty of free space). I could see other processes blocked waiting for this table to be unlocked. Finally, after about 4.5 hours, they got the "Lock wait timeout exceeded; try restarting transaction" error. This is actually the second time they've tried, and it seems to fail right about when I would expect it to complete. innodb_lock_wait_timeout is set to the default of 50; I can't imagine it running for so long only to be affected by this. No errors are logged, and no deadlocks or other weirdness show up in 'show engine innodb status'. Can anyone help me with other ideas? I'm fairly stumped on this one. Thanks
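If it happens again, the InnoDB tables in information_schema (present in 5.5, and in 5.1 with the InnoDB plugin) can show exactly which transaction holds the lock the ALTER is waiting on; a generic diagnostic sketch:

    -- Who is waiting on whom right now?
    SELECT r.trx_id               AS waiting_trx,
           r.trx_mysql_thread_id  AS waiting_thread,
           r.trx_query            AS waiting_query,
           b.trx_id               AS blocking_trx,
           b.trx_mysql_thread_id  AS blocking_thread,
           b.trx_query            AS blocking_query
    FROM   information_schema.INNODB_LOCK_WAITS w
    JOIN   information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
    JOIN   information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;

    -- Current value of the timeout that produced the error.
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';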
MI Data Warehouse Advice Posted: 22 Apr 2013 07:03 PM PDT I have recently started a new job and part of my remit is to try to rescue the Management Information (MI) Data Warehouse. I use the term Data Warehouse very loosely here! The server setup is:
The disks are split into 3 drives:
These are the observations I have made regarding the database:
Importing data: the data is imported using batch files and OSQL. It is slow, clunky, and prone to failure (it has failed 4 times and I have only been there for two and a half weeks). The logging is also poor. So apart from all that, it is perfect... I need to find a way to fight my way out of this mess but I am not sure how to go about it. Ideally, I want to be able to:
The main issue at the moment is performance. I have created a new filegroup on drive D: (where the log files are stored) and placed a few nonclustered indexes there. I am being slightly cautious, as I don't want to increase the import times; these are taking too long as it is! I wanted to partition the larger tables, but partitioning is not included in Standard; it is an Enterprise feature. I realise that this is a pretty huge task and I am not looking for a magic fix, but a little guidance on how to attack this would be a great help. EDIT: I should also point out that there is no test or dev environment for this either...
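For reference, the filegroup-plus-index move described above looks roughly like this in T-SQL; database, file, path, and index names are all placeholders.

    -- Add a filegroup on the D: drive and put a data file in it (placeholder names/paths).
    ALTER DATABASE MIWarehouse ADD FILEGROUP FG_Indexes;
    ALTER DATABASE MIWarehouse
        ADD FILE (NAME = N'MIWarehouse_idx1',
                  FILENAME = N'D:\Data\MIWarehouse_idx1.ndf',
                  SIZE = 1024MB, FILEGROWTH = 256MB)
        TO FILEGROUP FG_Indexes;

    -- Place a nonclustered index on the new filegroup.
    CREATE NONCLUSTERED INDEX IX_FactSales_DateKey
        ON dbo.FactSales (DateKey)
        INCLUDE (Amount)
        ON FG_Indexes;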