[how to] How can a database be normalized when an optional field causes denormalization? |
- How can a database be normalized when an optional field causes denormalization?
- Does an index with multiple columns make a similar index redundant?
- Percona xtrabackup Prepare Fails
- Postgres 9.2 select multiple specific rows in one query
- Managing version control of Erwin file changes with multiple environments
- JOIN to read first table and get possible relationship in the second table
- How to get a row_number to have the behavior of dense_rank
- Can I decrypt TDE transaction log files for use with 3rd party software?
- XML data source with XSD validation importing the same data despite different files
- Why do linked servers have a limitation of 10 branches in a CASE expression?
- Generate List of Missing Relationships
- Troubleshooting SOS_SCHEDULER_YIELD wait
- How to do an in-place upgrade of SQL Server 2005 Express to SQL Server 2008 R2 Express
- How do I turn off ALTER DATABASE script output in VS 2012?
- How to achieve this sql server replication scenario?
- Is it "legal" to CREATE and DROP #SomeTable more than once?
- The Login is from an untrusted domain - domain setup with computer level login
- Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing
- How to override SSIS 2008 package config file path?
- Why would SequelPro only import 23k rows out of 130k?
- Why some of mysql system variables are readonly
- MySQL server crashed.
- SQL Server replication conflicts after migration from 2000 to 2008
- How to restore a filegroup from its backup in SQL Server
- Will a primary key be added as a clustered index?
- Tools and methodologies to keep two DBs aligned
- How to connect to a Database made by Oracle SQL Database?
- MySQL: Lock wait timeout exceeded
- Do you know an easy way to generate one record for each hour of the past 12 hours?
- 3rd party dll in SQL Server CLR
How can a database be normalized when an optional field causes denormalization? Posted: 22 May 2013 08:34 PM PDT Note: I attempted to provide images to illustrate my question, but I lack the reputation to embed them on this StackExchange site. If someone with edit privileges can embed them in the question, it would be much appreciated. Suppose you have a warehouse full of widgets. Each widget in the warehouse is stored in a specific, identifiable location within the warehouse. You might have a schema which looks like the Original Layout <1> in the image link. A widget may also (optionally) be located on a cart in the warehouse, which could lead to a schema like Carts Added to Schema <2> in the image link. However, a problem arises because a cart can also only be in one location within the warehouse, which turns the prior schema into something like Denormalized Schema <3> in the image link. However, this schema is denormalized and can result in anomalies, such as if the data were:

widgetID  storageLocationID  cartID
========  =================  ======
1         foo                A
2         bar                A

Although both widgets are located on the same cart, the database shows them in different locations, which is not possible. If a cart were required, it would be relatively simple to address this normalization issue by simply removing storageLocationID from the Widgets table. However, because a cart is not required, that solution will not work. How can this data be structured to eliminate the anomalies? |
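One way to remove the anomaly (a minimal sketch, with assumed table and column names based on the description above) is to record a cart's location only on the cart's own row, and to let a widget point at either a storage location or a cart, but never both:

    CREATE TABLE StorageLocations (
        storageLocationID int PRIMARY KEY
    );

    CREATE TABLE Carts (
        cartID            int PRIMARY KEY,
        storageLocationID int NOT NULL REFERENCES StorageLocations (storageLocationID)
    );

    CREATE TABLE Widgets (
        widgetID          int PRIMARY KEY,
        storageLocationID int NULL REFERENCES StorageLocations (storageLocationID),
        cartID            int NULL REFERENCES Carts (cartID),
        -- a widget sits either directly in a location or on a cart, never both
        CONSTRAINT CK_Widgets_OneLocation CHECK (
            (storageLocationID IS NOT NULL AND cartID IS NULL)
            OR (storageLocationID IS NULL AND cartID IS NOT NULL)
        )
    );

A widget on a cart then inherits the cart's location through the Carts row, so the two widgets in the example can no longer disagree about where cart A is.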
Does an index with multiple columns make a similar index redundant? Posted: 22 May 2013 06:59 PM PDT Let's say I have a table that looks like this: with an index that looks like this: Then, for some reason, someone has created a second index: Is it definitely safe to delete the first index? Is it just taking up unnecessary disk space? Will the second index cover all cases of the first? The column ordering is definitely "ActivityName" first. |
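For illustration (a sketch with assumed names, since the table and index definitions were not included above), the situation described is roughly this:

    CREATE TABLE dbo.Activity (
        ActivityID   int IDENTITY PRIMARY KEY,
        ActivityName varchar(100) NOT NULL,
        SessionID    int NOT NULL
    );

    CREATE INDEX IX_Activity_Name
        ON dbo.Activity (ActivityName);

    CREATE INDEX IX_Activity_Name_Session
        ON dbo.Activity (ActivityName, SessionID);

    -- Both indexes can satisfy a seek on the shared leading column:
    SELECT ActivityID
    FROM   dbo.Activity
    WHERE  ActivityName = 'Login';

Because the wider index has the same leading column, any seek the narrower index can serve, the wider one can serve too, which is why the single-column index is the usual candidate for removal; the trade-off is that the wider index is physically larger to scan.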
Percona xtrabackup Prepare Fails Posted: 22 May 2013 06:31 PM PDT I made a backup and according to Percona it completed successfully. Now I'm trying to prepare it and having the following issue: I've been searching for answers for quite a while and not finding anything conclusive. Any advice on how to resolve this? Edit: per comment below, here is requested info: xtrabackup_55 version 2.0.3 for Percona Server 5.5.16 Linux (x86_64) (revision id: 470) |
Postgres 9.2 select multiple specific rows in one query Posted: 22 May 2013 04:16 PM PDT I have a table with three columns,
I want to select the values based on a range of days, i.e., from two days ago until now. The table may contain one or two rows per day; if it contains two rows, I want to select the second row plus all the rows following it in the period of time. To clarify: if I wanted to select the values from day 26 (only the second row) plus all the values until day 29, and so on, can it be done in one query? |
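Without the table definition it is hard to be exact, but as a sketch (assuming columns named day, created and value in a table t), a single query using a window function can pick only the latest row of the starting day and everything from the days after it:

    WITH ranked AS (
        SELECT day, created, value,
               row_number() OVER (PARTITION BY day ORDER BY created DESC) AS rn
        FROM   t
        WHERE  day >= current_date - 2
    )
    SELECT day, created, value
    FROM   ranked
    WHERE  day > current_date - 2   -- later days: every row
       OR  rn = 1                   -- starting day: only its last row
    ORDER BY created;

Window functions are available in Postgres 9.2, so this runs in one statement.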
Managing version control of Erwin file changes with multiple environments Posted: 22 May 2013 03:01 PM PDT What is the easiest way to manage three environments (DEV, QA and PROD) within a single Erwin file or multiple Erwin files? The Erwin file is separated by subject area, so I'm not sure reverse engineering is a viable option. We perform migrations every day, but they don't always conform to what is in Erwin, so we have a difficult time showing our end-users what is in QA and what is in PROD. |
JOIN to read first table and get possible relationship in the second table Posted: 22 May 2013 02:30 PM PDT Consider a main table as and a second table for the relationship as How can I get a full list of all |
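The table and column names were lost above, so purely as a sketch, a LEFT JOIN returns every row of the main table together with the relationship row when one exists:

    SELECT m.id,
           m.name,
           r.related_id              -- NULL when no relationship exists
    FROM   main_table    AS m
    LEFT JOIN relationship AS r
           ON r.main_id = m.id;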
How to get a row_number to have the behavior of dense_rank Posted: 22 May 2013 02:06 PM PDT I have a stored procedure that will be used to page through data. One of the requirements of the procedure is to have a parameter that will be used to sort two columns of data; the sorting should be used when applying the Some sample data: The procedure and the sorting are currently working as written. But an issue has cropped up with the way that we are applying the For example, if I run the following query: See SQL Fiddle with Demo (a trimmed-down stored proc is also present) I get a result of: But the desired result is: As you can see, the |
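For anyone comparing the two functions, a small sketch (table and column names assumed) of how they differ on a tied sort column:

    SELECT id,
           SomeColumn,
           ROW_NUMBER() OVER (ORDER BY SomeColumn) AS rn,  -- 1, 2, 3, 4 even when values tie
           DENSE_RANK() OVER (ORDER BY SomeColumn) AS dr   -- 1, 1, 2, 2 on the same ties
    FROM   dbo.SomeTable;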
Can I decrypt TDE transaction log files for use with 3rd party software? Posted: 22 May 2013 06:22 PM PDT Is there a way to decrypt TDE log files on disk for use with 3rd party software? I have the original key and certificates. I am trying to use the transaction files with a 3rd party tool in order to recover some lost data. |
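For context, TDE is only decrypted inside the database engine, so a third-party log reader normally needs the files to be readable by an instance that holds the key. A minimal sketch (file names and passwords are placeholders) of recreating the certificate on such an instance:

    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPasswordHere!1';

    CREATE CERTIFICATE TDECert
        FROM FILE = 'C:\backup\TDECert.cer'
        WITH PRIVATE KEY (
            FILE = 'C:\backup\TDECert.pvk',
            DECRYPTION BY PASSWORD = 'PrivateKeyPasswordHere!1'
        );

Whether the third-party tool can then read the log files directly is up to the tool; the certificate only lets SQL Server itself decrypt the pages.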
XML data source with XSD validation importing the same data despite different files Posted: 22 May 2013 03:25 PM PDT Please bear with me, I've never used XML as a data source, nor am I familiar with XML itself, so I'm not entirely familiar with the nomenclature. I have a series of XML files with data to be imported to a database and the files are being validated with a single XSD file. I have the file names of the XML and XSD files statically set as variables and file connections set with the variables on the ConnectionString property. Prior to importing the file, I run the XML files through an XML Task with the operation type as Validate, using the XSD file as the SecondOperand. I have the SourceType and SecondOperandType of the XML Task set to the file connections of the XML and XSD files, as stated above. The first file I imported worked fine, the data matches up perfectly. However, upon moving on to the second file, it imported the same data from the first file, leaving me with duplicate rows in the database. I double-checked the file name variable and the contents of the file itself and they are both correct, so I'm not sure of where the duplication would have taken place. Is there something else that needs to be done to distinguish the data from the second file from that of the first? Does the validation file or process need to be altered in some way? |
Why do linked servers have a limitation of 10 branches in a CASE expression? Posted: 22 May 2013 06:01 PM PDT Why does this produce this result?
Clearly there isn't a nested CASE expression here. Another oddity: this inline table-valued function produces the same error, but a similar multi-statement TVF works fine: |
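As a sketch of the pattern that appears to trip the limit (server, database and table names are placeholders): a CASE with more than 10 branches evaluated through a four-part linked-server name.

    SELECT CASE x.n
               WHEN 1  THEN 'one'    WHEN 2  THEN 'two'    WHEN 3 THEN 'three'
               WHEN 4  THEN 'four'   WHEN 5  THEN 'five'   WHEN 6 THEN 'six'
               WHEN 7  THEN 'seven'  WHEN 8  THEN 'eight'  WHEN 9 THEN 'nine'
               WHEN 10 THEN 'ten'    WHEN 11 THEN 'eleven'
               ELSE 'other'
           END
    FROM LinkedServer.SomeDb.dbo.SomeTable AS x;

A simple CASE is rewritten internally into nested searched CASE expressions (CASE WHEN ... ELSE CASE WHEN ... END END), and CASE nesting is capped at 10 levels, which would explain why the 11th branch fails once the expression is shipped to the remote server.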
Generate List of Missing Relationships Posted: 22 May 2013 06:33 PM PDT I have two tables, I am able to generate a list of every What I need to do is take the data in Example data: Since there is no record for I feel like I need to use something like: But I have no idea how to format it properly to get the information I need. Can anyone provide any insight on what I may be missing, or whether I am even heading in the right direction? |
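Without the real table names this is only a sketch, but the usual shape of the query is a CROSS JOIN of the two sides filtered with NOT EXISTS against the relationship table:

    SELECT p.parent_id,
           c.child_id
    FROM   parents  AS p
    CROSS JOIN children AS c
    WHERE  NOT EXISTS (
               SELECT 1
               FROM   relationships AS r
               WHERE  r.parent_id = p.parent_id
                 AND  r.child_id  = c.child_id
           );

Each row returned is a combination that could exist but has no matching relationship record.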
Troubleshooting SOS_SCHEDULER_YIELD wait Posted: 22 May 2013 01:45 PM PDT Running our corporate ERP (Dynamics AX 2012), I noticed our production environment seemed much slower than our development systems. After performing the same activities in both the development and production environments while running a trace, I confirmed that SQL queries were executing very slowly in our production environment compared to development (10-50x slower on average). At first I attributed this to load, so I re-ran the same activities on the production environment during off hours and found the same results in the trace. I cleared my wait stats in SQL Server, let the server run under its normal production load for a little while, and then ran a wait-stats query (see the representative sketch after this question): My results are as follows: So seemingly the largest wait is SOS_SCHEDULER_YIELD by far, and I googled around and found it typically relates to the CPU not being able to keep up. I then ran this query multiple times in succession. I know I'm supposed to be looking for schedulers with a non-zero runnable_tasks_count or pending_disk_io_count, but it's basically zero almost all the time. I should also mention that Max Degree of Parallelism was set to 1, since the Dynamics AX workload is typically OLTP in nature, and changing it to 8 did not make much of a difference in the above wait stats; they became almost exactly the same, with the same performance problems. I'm sort of at a loss as to where to go from here; I basically have a SQL Server that is seemingly CPU-strapped but not waiting on runnable tasks or IO. I do know that the IO subsystem of this SQL Server isn't very good, because running SQLIO on the drive containing the actual databases can produce pretty low numbers (think 10 MB a sec for certain types of reads/writes); that said, it doesn't seem like SQL is waiting on that, because of the amount of memory on the server caching most of the databases. Here is some environment information to help: Production environment:
My DEV:
I would welcome any input on other things to look for. |
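The wait-stats query referenced in the question above was not included in the digest; a representative sketch of the two checks described (total waits ranked by wait time, and the scheduler counters) would look roughly like this:

    SELECT TOP (20)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms,
           wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_total
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                             'XE_TIMER_EVENT', 'REQUEST_FOR_DEADLOCK_SEARCH')
    ORDER BY wait_time_ms DESC;

    -- The scheduler check mentioned in the question:
    SELECT scheduler_id, current_tasks_count, runnable_tasks_count, pending_disk_io_count
    FROM   sys.dm_os_schedulers
    WHERE  status = 'VISIBLE ONLINE';   -- user schedulers only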
How to do an in-place upgrade of SQL Server 2005 Express to SQL Server 2008 R2 Express Posted: 22 May 2013 02:06 PM PDT I would like to upgrade my SQL Server 2005 Express instance, which was installed using this command: but when I try to upgrade the server instance, the instance When I check the Add/Remove Programs list in the OS, it shows that both SQL Server 2005 and 2008 R2 are installed (so I may be doing a side-by-side upgrade rather than an in-place one), and when I click the OS Start button and go to the SQL Server 2008 R2 folder in |
How do I turn off ALTER DATABASE script output in VS 2012? Posted: 22 May 2013 03:02 PM PDT I am using SQL Server 2005 in conjunction with Visual Studio 2012 and a SSDT database project. When I publish my VS project and generate the script to run against my database, VS includes the following in the script. I really don't want the script to go changing database properties, particularly the PAGE_VERIFY setting. I notice that the script doesn't set the properties back the way it found them. How do I turn this off? I have spent time in the Tools->Options dialog and I just don't see the setting. |
How to achieve this sql server replication scenario? Posted: 22 May 2013 07:51 PM PDT I want to achieve the scenario illustrated below via SQL Server replication. Client A and Client B are client databases containing the same article, Table A. This article needs to be published by Client A and Client B, whose changes need to be merged at the server (the direction of the arrows represents data flow). Server is the destination database, where article Table A should contain the changes made by both Client A and Client B. The trio consisting of Server, Client A and Client B are running SQL Server 2008 R2. This is what I've tried, in vain: I made Client A and Client B publishers and initially subscribed Server to Client A (went well). Next I tried subscribing Server to Client B (went haywire; I found this msdn post where I figured out the reason behind the failure). I still need to achieve the scenario diagrammed above. Is this even possible? If yes, any ideas how? Thanks for your time. |
Is it "legal" to CREATE and DROP #SomeTable more than once? Posted: 22 May 2013 06:46 PM PDT I've got my code sort of segregated as "coherent blocks" that I can insert into a longer "configuration script" over and over, and one of the patterns I'm using is this: But now SSMS is complaining that the object already exists by the next time the I think it's obvious I'm going to have to declare the table once at the beginning of the script, truncate instead of drop, but it's frustrating, naturally, to not be able to just drop the table and use the same name again. |
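A common workaround (sketch only; the table definition here is invented) is to guard the CREATE with an existence check against tempdb, or to separate the repeated blocks into their own batches so the parser never sees the same CREATE twice in one batch:

    IF OBJECT_ID('tempdb..#SomeTable') IS NOT NULL
        DROP TABLE #SomeTable;

    CREATE TABLE #SomeTable (
        id   int         NOT NULL,
        name varchar(50) NULL
    );

    -- ... work with #SomeTable ...
    GO   -- starting a new batch lets the next block create the same name again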
The Login is from an untrusted domain - domain setup with computer level login Posted: 22 May 2013 02:54 PM PDT I have a SQL Server 2008 R2 database I'm attempting to connect to using ADO.NET with integrated security in the connection string. The database resides on a 64-bit Windows Server 2003 machine. The client is a 64-bit Windows 7 Enterprise machine. The specific error I receive is: SQL Server 2008: The login is from an untrusted domain and cannot be used with Windows authentication The computer is added as a Windows account on the SQL Server as DOMAIN.edu\ComputerName$ and has been granted full permissions on the target database. Both the server and client are part of the same Active Directory domain. The computers can communicate with each other over Remote Desktop. The client machine can see network shares on the server machine, so I don't think network connectivity is a problem. I didn't have a problem with this setup when the SQL Server 2008 R2 database resided on a 64-bit Windows XP Professional machine and the client was the same machine. What else can I verify or modify to make my current setup work for machine-level authentication? |
Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing Posted: 22 May 2013 04:55 PM PDT We are using SQL authentication and .NET 4.0 connection strings to connect to an Enterprise Edition 2012 SP1 SQL Server on a Windows 2008 R2 Enterprise server. We use about 50 servers split into 8 different groups for different parts of a website. Our website is using this SQL Server to log visit-tracking data, and over the last few days it has spat out the following messages about resetting connection pools.
Errorlog reads
After some digging I found this document from the CSS blog and this one by the scholar Aaron Bertrand (I know the error number is different, but the failure ID is the same and a number of the messages are identical). Failure ID 46 suggests that the login did not have permissions. Our logins default to the master database, and the DB name is specified in the connection string. I wanted to check the number of connection pools, etc., so I checked all the counters in Perfmon for ".Net Data Provider for SqlServer"; it only gave me the option of defaultdomain9675 for the instance, so I selected that, assuming it is a system-generated ID name for our datacentre network. Unfortunately all of the counters are reading zero. On one of our other main servers the connection pools are hovering around 10, which is what I expected to see on a healthy server with that kind of load. My question is threefold: 1) Can anyone suggest why the Windows 2008 R2 server is not showing ".Net Data Provider for SqlServer"? 2) Has anyone experienced this? I believe that the login not having permissions is a red herring. 3) If different groups of web servers have the same connection string syntax but with slightly different whitespace, would this cause the server to use another connection pool? |
How to override SSIS 2008 package config file path? Posted: 22 May 2013 06:09 PM PDT I have an SSIS 2008 package with a single configuration named Config. I would like to use the According to MSDN we should be able to do this using the /SET command as it mentions here:
I've tried: and it fails with:
I've also pasted a redacted version of my .dtsx file here in case anyone thinks I'm just mistyping something. And here is the contents of config that I'm trying to change it to. I realize there are other ways of doing indirect configs using environment variables, etc... but that is not what I'm interested in so please do not offer up those suggestions. I'm specifically trying to use the /SET command since Microsoft has explicitly stated that we should be able to. Due to the way our environment is setup this method would be the best for us. |
Why would SequelPro only import 23k rows out of 130k? Posted: 22 May 2013 07:09 PM PDT I use SequelPro for MySQL on Mac OS X, and I used the import function to upload a 130k-row .csv file to my database. Everything seems to work fine, then I get the message When I hit "OK," everything else seems to work relatively fine -- I'm just missing about 107,000 rows. Any idea as to what it could be -- maybe if I use something other than auto-detect during the import? I thought it might have been some extra commas floating around in the actual .csv file, which there were, but I got rid of those and the same thing happened. Thanks! -- Edit: I also don't know if this is an option for everybody, but since the problem stems from exporting from a Windows machine to a Mac and then opening it in Excel for Mac, the problem seems to go away if I just use OpenOffice instead. |
Why some of mysql system variables are readonly Posted: 22 May 2013 04:09 PM PDT There are roughly 277 MySQL server system variables. Some of these variables are dynamic and some are not. I don't understand the reason behind read-only variables: why are a few of MySQL's system variables read-only, and what would happen if they were made dynamic? Is there any deep reason for the read-only variables? For now we can consider:
and there are many other variables we can find at http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Of course, I know we can't change a variable like |
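As a small illustration, trying to change a read-only variable on a running server fails because it is only read during startup, while a dynamic variable can be changed live (the exact error text may vary by version):

    SET GLOBAL datadir = '/var/lib/mysql-new';
    -- ERROR 1238 (HY000): Variable 'datadir' is a read only variable

    SET GLOBAL max_connections = 500;   -- dynamic: takes effect immediately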
MySQL server crashed. Posted: 22 May 2013 03:09 PM PDT Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion. I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 says that there is a newer version and won't install. Trying to install 5.6 fails. I found this error in the console: Help me please?? I am stuck and in a world of hurt and despair. |
SQL Server replication conflicts after migration from 2000 to 2008 Posted: 22 May 2013 05:09 PM PDT I got a suggestion over at Stack Overflow to post here... grateful for any and all help. Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with We recently migrated this entire solution as follows:
The new web box received the 64-bit version of SQL Server Compact 3.5 tools and we now call The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set such that HOST_NAME() is used to make sure only records for a given inspector with this particular status will show up on their mobile device. The user can sync multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections down, or other updates such as lookup table data... typical merge replication here, and it has been working great for years. Once the field user changes the status of the inspection, it travels from the mobile device to the SQL Server database and is removed from their iPaq. The inspection has additional workflow on the web app from here on out. Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. The settings on the publication are, as far as I can tell, the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact. It is as if the web/DB data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration. Here is a summary of the order of operations: New row created in the database >> mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost. From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly. Thanks for taking the time to read; please help. I am stuck after 3 days. |
How to restore a filegroup from its backup in SQL Server Posted: 22 May 2013 06:43 PM PDT I need guidance on how to restore a filegroup in a database from a filegroup backup taken on another server hosting the same database. Is it possible to restore the filegroup by running RESTORE DATABASE for the filegroup? I was trying to restore the filegroup from its backup like below The backup set holds a backup of a database other than the existing When I try to restore into the same DB, the restore works perfectly. When I run a SELECT query against a partition in that filegroup, it throws an error When I check the status of all filegroups, I see that one of the filegroups, MFG12, has status = "Restoring". How do I set it back to online mode? When I try to restore the log file as well, I get the error below Please help: how do I bring back the missing filegroup data, given that the filegroup backup was taken from the same database on another server, or from the same server's DB? |
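A file or filegroup backup can generally only be restored into the database it was taken from, which is presumably what the "backup set holds a backup of a database other than the existing" message is pointing at. As a minimal sketch (database, filegroup and path names are placeholders), the filegroup stays in the Restoring state until the covering log backups have been applied:

    RESTORE DATABASE MyDb
        FILEGROUP = 'MFG12'
        FROM DISK = 'D:\Backups\MyDb_MFG12.bak'
        WITH NORECOVERY;

    RESTORE LOG MyDb
        FROM DISK = 'D:\Backups\MyDb_log.trn'
        WITH RECOVERY;   -- brings the restored filegroup back online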
Will a primary key be added as a clustered index? Posted: 22 May 2013 04:27 PM PDT I've inherited a database where no primary keys were defined on the tables. There are also no clustered indexes assigned to the tables. If I perform an ALTER TABLE to assign a primary key, will this result in SQL Server also creating a clustered index? If so, should I expect slowness on the database due to IO from the data being repositioned on the hard disk? |
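A short sketch (table and column names assumed): on a heap with no clustered index, a PRIMARY KEY constraint creates a clustered index by default, and building it does rewrite the table's pages.

    ALTER TABLE dbo.SomeTable
        ADD CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (SomeID);
    -- Use PRIMARY KEY NONCLUSTERED instead if the heap should be left untouched.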
Tools and methodologies to keep two DBs aligned Posted: 22 May 2013 02:09 PM PDT Two DBs have schemas that represent the same semantic objects. The first one is the production DB (non-RDBMS, an in-house in-memory DB with a huge amount of RAM). The other is Postgres. Once in a while the production DB is changed (schema upgrade). Question: what is the best practice for keeping DBs of different types aligned semantically? |
How to connect to a Database made by Oracle SQL Database? Posted: 22 May 2013 04:03 PM PDT I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it, and the online portion of it works fine. I can log in with Username: sys and Password: **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work? I also found this LINK. Is this something I should do? I do not have a domain, though. listener.ora tnsnames.ora |
MySQL: Lock wait timeout exceeded Posted: 22 May 2013 08:09 PM PDT I have a developer who has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running Today while it was running I checked in on it periodically; everything looked normal, the query was running, in state "copying to tmp table", and I could see the temp table on disk getting larger and larger (still plenty of free space). I could see other processes blocked waiting for this table to be unlocked. Finally, after about 4.5 hours, they got the "Lock wait timeout exceeded; try restarting transaction" error. This is actually the 2nd time they've tried, and it seems to fail right about when I would expect it to complete. innodb_lock_wait_timeout is set to the default of 50; I can't imagine the statement would run for so long only to be affected by this. No errors logged, no deadlocks or other weirdness seen in 'show engine innodb status'. Can anyone help me with other ideas? I'm fairly stumped on this one. Thanks |
Do you know an easy way to generate one record for each hour of the past 12 hours? Posted: 22 May 2013 05:03 PM PDT I have a report that shows the count of events for the past 12 hours, grouped by the hour. Sounds easy enough, but what I am struggling with is how to include records that cover the gaps. Here is an example table: Data looks like this: I need to create a result set that has one record for every hour of the past 12 hours, regardless of there being events during that hour or not. Assuming the current time is '2012-03-08 11:00:00', the report would show (roughly): I came up with a solution that uses a table that has one record for every hour of the day. I managed to get the results I was looking for using a UNION and some convoluted case logic in the where clause, but I was hoping somebody had a more elegant solution. |
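One alternative to the helper table, offered only as a sketch (the Events table and column names are assumptions), is to generate the 12 hour buckets with a recursive CTE and LEFT JOIN the events onto them:

    WITH hours AS (
        SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, GETDATE()) - 11, 0) AS hour_start
        UNION ALL
        SELECT DATEADD(HOUR, 1, hour_start)
        FROM   hours
        WHERE  hour_start < DATEADD(HOUR, DATEDIFF(HOUR, 0, GETDATE()), 0)
    )
    SELECT h.hour_start,
           COUNT(e.EventID) AS EventCount          -- 0 for hours with no events
    FROM   hours AS h
    LEFT JOIN dbo.Events AS e
           ON e.EventTime >= h.hour_start
          AND e.EventTime <  DATEADD(HOUR, 1, h.hour_start)
    GROUP BY h.hour_start
    ORDER BY h.hour_start;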
3rd party dll in SQL Server CLR Posted: 22 May 2013 04:41 PM PDT I need to use a third-party DLL in C# trigger code in SQL Server CLR, but when I try to add a reference, it only shows some DLLs from SQL Server. How can I add my third-party DLL to SQL Server? |
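As a sketch (paths, names and the permission set are assumptions): the third-party DLL must itself be registered in the database with CREATE ASSEMBLY before the assembly containing the trigger can reference it, and it has to be a pure managed .NET assembly.

    CREATE ASSEMBLY ThirdPartyLib
        FROM 'C:\libs\ThirdParty.dll'
        WITH PERMISSION_SET = EXTERNAL_ACCESS;   -- or UNSAFE if the library requires it

    CREATE ASSEMBLY MyTriggers
        FROM 'C:\libs\MyTriggers.dll'            -- the assembly containing the trigger code
        WITH PERMISSION_SET = EXTERNAL_ACCESS;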