[how to] Are duplicates ever acceptable in a database
- Are duplicates ever acceptable in a database
- What to do after deleting a million records
- MySQL 5.6 DateTime Incorrect datetime value: '2013-08-25T17:00:00+00:00' with Error Code 1292
- MySQL / Fusion IO Configuration Question
- Is there a way to implement a cross-database task on MSSQL 2012 with the Availability Groups feature?
- Dynamic form from a database
- Restoring a database to a point in time: Database I want not showing up in dropdown list
- Mysql encrypted database contents by proxy?
- Force SQL Server database speed [on hold]
- Sphinx: Indexer causes MySQL to hang
- Restrict access to rows by username
- DBCC checkdb showing failure after a restore
- Reverse connect by prior level value for arbitrarily-deep hierarchy
- SQL Server transaction log backups: test whether tail log follows last known log backup
- Managing constant changes to database
- Alter mysql database engine doesn't work from cluster to other
- How to disable flashback query logging for a specific table (Oracle)?
- MS Access: error 3045 while linking to back-end
- Security for Oracle linked server from SQL Server
- Statistical Analysis of Data that has to be done in an order?
- Record versioning and promotion
- Are there any disadvantages to partitioning on financial year?
- Problems in opening mysql query browser
- MongoDB: move documents before capping
- SUPER privilege not defined for master user in Amazon MySQL RDS
- How to import table's data in MySql from Sql Server?
- Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?
- Deleting Data From Multiple Tables
- Minimizing Indexed Reads with Complex Criteria
Are duplicates ever acceptable in a database Posted: 25 Aug 2013 07:54 PM PDT A warehouse system I am working on allows inwards consignments [type = 0] and outwards consignments [type = 1]. The user selects multiple products for each consignment, and then multiple warehouse locations for each product. The system needs to keep track of stock coming in, and allow reporting and checking for available stock, etc. When a consignment goes out, the system must select which products to use (based on first-in-first-out or other custom logic). Products from an inwards consignment can go out on multiple outwards consignments, and products on an outwards consignment can come from multiple inwards consignments. I have designed the following table structure:
- products [id, description]
- locations [id, description]
- consignments [id, description, type]
- consignment_products [id, consignment_id, product_id]
- consignment_product_locations [id, consignment_product_id, qty, notes]
- consignment_outwards [id, consignment_product_location_id, consignment_product_id, qty]
All the tables hold the info for both inwards and outwards consignments. consignment_product_locations.qty is the qty that the user selected for the consignment; consignment_outwards.qty is the breakdown (detail) of the assigned inwards consignment product quantities.
(NB: this would make reporting harder)
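A minimal sketch of how first-in-first-out allocation could work against this structure, using Python's sqlite3. The table and column names follow the schema in the question, but the `received` date column and the `allocate_fifo` helper are assumptions added for illustration, not part of the poster's design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE consignment_product_locations (
    id INTEGER PRIMARY KEY,
    consignment_product_id INTEGER,
    qty INTEGER,          -- qty received on the inwards consignment
    received DATE         -- assumed column, needed for FIFO ordering
);
CREATE TABLE consignment_outwards (
    id INTEGER PRIMARY KEY,
    consignment_product_location_id INTEGER,
    qty INTEGER           -- portion of the inwards line consumed
);
""")
# Two inwards lines for the same product: 10 units, then 5 units.
conn.executemany(
    "INSERT INTO consignment_product_locations VALUES (?, ?, ?, ?)",
    [(1, 1, 10, "2013-01-01"), (2, 1, 5, "2013-02-01")],
)

def allocate_fifo(conn, needed):
    """Consume inwards lines oldest-first, recording each draw-down."""
    rows = conn.execute("""
        SELECT l.id, l.qty - COALESCE(
                 (SELECT SUM(o.qty) FROM consignment_outwards o
                  WHERE o.consignment_product_location_id = l.id), 0) AS remaining
        FROM consignment_product_locations l
        ORDER BY l.received
    """).fetchall()
    allocations = []
    for line_id, remaining in rows:
        if needed == 0:
            break
        take = min(remaining, needed)
        if take > 0:
            conn.execute(
                "INSERT INTO consignment_outwards "
                "(consignment_product_location_id, qty) VALUES (?, ?)",
                (line_id, take))
            allocations.append((line_id, take))
            needed -= take
    return allocations

print(allocate_fifo(conn, 12))  # drains line 1 fully, takes 2 from line 2
```

The point of the sketch: the outwards rows are not duplicates of the inwards rows, they are the join records that let one inwards line feed many outwards consignments and vice versa.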
What to do after deleting a million records Posted: 25 Aug 2013 08:38 PM PDT We are implementing a new feature in our system that will cause about a million records to be purged (each record is tiny, basically a GUID, a date, and four
This will run every night at 1 am, and will purge about a million records every time it runs. Is there anything I should be noting or doing for a table like this? Any properties I should put on the table, or any routines I should run regularly to "clean up"? I've never dealt with a table like this before. The table has a single clustered index (GUID + one of the
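One common pattern for nightly purges of this size is to delete in batches, so no single transaction holds large locks or bloats the log. A sketch using sqlite3 with a hypothetical `trace` table (the real table and cutoff logic are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace (id INTEGER PRIMARY KEY, created DATE)")
conn.executemany("INSERT INTO trace (created) VALUES (?)",
                 [("2013-01-01",)] * 950 + [("2013-09-01",)] * 50)

def purge_in_batches(conn, cutoff, batch=100):
    """Delete expired rows a batch at a time so no single
    transaction grows huge (keeps locks and log usage small)."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM trace WHERE id IN "
            "(SELECT id FROM trace WHERE created < ? LIMIT ?)",
            (cutoff, batch))
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

print(purge_in_batches(conn, "2013-08-01"))
```

On SQL Server specifically, it is also typical to reorganize or rebuild the clustered index and update statistics after a mass delete, since a nightly million-row purge fragments the index and skews the statistics the optimizer relies on.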
MySQL 5.6 DateTime Incorrect datetime value: '2013-08-25T17:00:00+00:00' with Error Code 1292 Posted: 25 Aug 2013 03:45 PM PDT I'm using MySQL 5.6 and I have a program that runs the following SQL statement: UPDATE Unfortunately, I get the following error: Incorrect datetime value: '2013-08-25T17:00:00+00:00' for column 's_time' at row 1. The datatype for s_time is DateTime. I have already attempted to set the allow_invalid_dates property using the Workbench. Can anyone understand and please explain this error to me? I know that if I manually change the statement to UPDATE Unfortunately, I cannot modify the program that supplies the SQL statement (which I'm told is valid by the creator of the program), and I also cannot understand what the +00:00 symbolises. Thanks
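The `+00:00` is the ISO 8601 UTC offset; MySQL's DATETIME literal parser does not accept a timezone suffix, which is why the value is rejected. If the statement cannot be changed at the source, one workaround is to normalise the value before it reaches MySQL. A sketch of that conversion (the function name is made up for illustration):

```python
from datetime import datetime, timezone

def to_mysql_datetime(iso_value):
    """Convert an ISO 8601 timestamp carrying an offset (e.g.
    '2013-08-25T17:00:00+00:00') into the 'YYYY-MM-DD HH:MM:SS'
    form that MySQL DATETIME accepts, normalising to UTC first."""
    dt = datetime.fromisoformat(iso_value)  # Python 3.7+ parses the offset
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(to_mysql_datetime("2013-08-25T17:00:00+00:00"))  # 2013-08-25 17:00:00
```

Note that `allow_invalid_dates` does not help here: it relaxes range checks (like Feb 31), not the literal format.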
MySQL / Fusion IO Configuration Question Posted: 25 Aug 2013 02:33 PM PDT I need a little configuration optimization. I think what I have is pretty good, but I feel like there's still room for improvement. Current Config Dedicated database server specs
I know this kind of question really requires eyes on the database server, but I will gladly append information to this question as needed in order to get a much better configuration. Thanks for the help in advance.
Is there a way to implement a cross-database task on MSSQL 2012 with the Availability Groups feature? Posted: 25 Aug 2013 01:15 PM PDT We use SQL Server 2012 and its new Availability Groups (AG) feature. There is a task for moving old data of some tables from one database to another database. Both databases are included in different availability groups. Previously (before using the AG feature) the task was resolved by adding the second server instance as a linked server (sp_addlinkedserver) and executing a distributed transaction in the following way:
Unfortunately, distributed transactions are not supported for AGs because databases may become inconsistent in case of failover (http://technet.microsoft.com/en-us/library/ms366279.aspx). Is there some way to implement this task while keeping the AG feature and without implementing rollback logic in case of exceptions?
Dynamic form from a database Posted: 25 Aug 2013 12:45 PM PDT I am considering developing a dynamic form so all the fields and validation types can be stored in the database. For example, I could do a table design something like this: I could then query the above table and generate HTML accordingly. The reason I am doing this is that each network/category can have different fields or the same fields. I also want Admins to have the ability to add more fields on the back-end so they will appear on the add-sale page. If a user clicks the submit button to submit an order - how to populate all the fields in the It is going to be problematic - you have to make sure all the fields are already included in the
Restoring a database to a point in time: Database I want not showing up in dropdown list Posted: 25 Aug 2013 12:58 PM PDT I want to restore to a point in time from a database that is already attached. When I try to do this using SSMS I cannot find the database I want in the dropdown list, but oddly the list contains a couple of databases that have been deleted. I tried closing/reopening SSMS in hopes that it would refresh, but no such luck. Anyone know why the database is not showing up in the "From database:" drop-down list? EDIT: If I try to restore a backup to a point in time, then no backup sets appear for me to select. However, if I restore the backup to the most recent point in time and then restore the transaction log, then I'm able to select a point in time when restoring the transaction log. But then when I query the database I find that it's still restored to the most recent point in time from the database backup, rather than the time I selected for the transaction log. How do I restore to a point in time? The database is using the full recovery model, but I'm not sure if I should be doing anything else as part of the backup to be able to restore to a point in time. Any advice would be greatly appreciated!
Mysql encrypted database contents by proxy? Posted: 25 Aug 2013 10:39 AM PDT Forgive the amateur nature of this question. I work with WordPress, often transforming the vanilla flavour into custom-made versions for clients. Increasingly, clients require more and more marketing information, which often includes entering traffic-source conversions, contact data, measuring performance by users, etc. The amount of lead generation happening on these sites is increasing too. I looked for some solutions, including encrypting information, but this would always mean I have to hack each and every plugin, and of course the WordPress core, which is something I want to avoid; and there seems to be the same problem that the decryption files would still be on the server, giving a hacker the means to decrypt (for easy solutions). So my question: Is there some better server-based utility where I can simply specify the database path, username and password to some utility that encrypts and decrypts information from another database as data is passed to it? This would be a tidy way of encrypting this information. I could specify a database elsewhere with much harder security and specify some sort of proxy database setting in the wp file. Any ideas? What I am looking for is a super clean and easy way for very, very basic server admins to increase the encryption of data inside WordPress in a universal way, i.e. one that works for all data, with no need to modify any WordPress plugins or core.
Force SQL Server database speed [on hold] Posted: 25 Aug 2013 12:30 PM PDT How can I optimize my database speed? Can I disable some functions for it? I'm using indexes and stored procedures. Are there more performance tools?
Sphinx: Indexer causes MySQL to hang Posted: 25 Aug 2013 02:09 PM PDT Not sure what's going on. I run indexer --all --rotate. When it finishes, MySQL hangs and does not accept new connections. From my observation, as soon as indexer finishes:
- all MySQL tables are not corrupt
- I'm using Percona MySQL 5.6.12-56
- the table is InnoDB
- I tried installing Sphinx from source and from RPM, and also tried Sphinx 2.1.1 and Sphinx 2.0.8
Process list when it hangs: cat processlist-2013-08-25-11-52.log | wc -l returns 352. sphinx.conf: every time I run indexer I get the following in /var/log/mysql.log. Looks like others have this problem as well, but I don't understand how he solved it: http://sphinxsearch.com/forum/view.html?id=11072
Restrict access to rows by username Posted: 25 Aug 2013 11:32 AM PDT In my database I store data of different users (e.g. addresses or invoices) in the corresponding tables. I want to make sure that a logged-in user only has access to their own data in the tables (so that a user cannot read, e.g., an address of another user). Currently this is done in the application accessing the MySQL server. Because the application will be split into multiple independent parts, written in different languages, I'm looking for a solution that is closer to the database; otherwise I need to make sure that the access rules are equal in every application. While I also have the alternative of a middleware on my list, I'm looking for an in-database solution in the first place. I already have a structure running in a test environment (it is a shortened version for illustration), but I would like to know if there is a better solution to achieve this. I already know that a Tables Views Constraints
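The usual in-database approach in MySQL is a view that filters the base table on CURRENT_USER(), with applications granted SELECT on the view only. A sketch of the pattern using sqlite3, where a user-defined `current_user` function stands in for MySQL's built-in (the table and column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for MySQL's CURRENT_USER(); in MySQL the view itself
# calls CURRENT_USER() and needs no application cooperation.
SESSION_USER = "alice"
conn.create_function("current_user", 0, lambda: SESSION_USER)

conn.executescript("""
CREATE TABLE addresses (
    id INTEGER PRIMARY KEY,
    username TEXT NOT NULL,
    street TEXT
);
INSERT INTO addresses (username, street) VALUES
    ('alice', '1 Elm St'), ('bob', '9 Oak Ave');

-- Applications get SELECT on the view only, never the base table,
-- so each connection sees only its own rows.
CREATE VIEW my_addresses AS
    SELECT id, street FROM addresses WHERE username = current_user();
""")
print(conn.execute("SELECT street FROM my_addresses").fetchall())
```

The filtering then lives in one place regardless of how many application languages connect, which is exactly the property the question asks for.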
DBCC checkdb showing failure after a restore Posted: 25 Aug 2013 03:38 PM PDT We restored a backup into a new database. After running a few delete queries we received the following message:
I ran DBCC CHECKDB and saw numerous lines such as: and The final section of output was: My question is: Should I attempt to restore the database again, or proceed with attempting a repair? I only hesitate because it is a rather large DB and restoring takes quite a while. UPDATE: a bit more info.
Reverse connect by prior level value for arbitrarily-deep hierarchy Posted: 25 Aug 2013 02:39 PM PDT Background: Using a menu hierarchy to drive a login process for users. Users have the ability to set their preferred menu item. When they log in, if they have a preferred menu item set, the system directs them to that item. If no preferred menu item is set, they log into the default menu item for their "most important" role. Code: The query uses Problem: A root menu item has When the data is returned, the values for the This should actually return: However, since the hierarchy is Having the level reversed means we can ask, "What is the 3rd-level menu item for the user named 'USERNAME'?" Expressed as a SQL WHERE clause: Question: How would you reverse the value of For example, something like: Obviously that won't work because Fiddle: http://sqlfiddle.com/#!4/60678/3/0 Strangely, I'm seeing different behaviour in Fiddle's 11g R2 instance than in the local Oracle instance -- the ROWNUM is picking up "1" in the Fiddle when it should be picking up "3". This prevents seeing the menu hierarchy, and hence the Ideas
|
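The core of the question is renumbering: CONNECT BY starting from the preferred (leaf) item numbers the leaf as level 1, while the user wants the root to be level 1. Since the total depth of the chain is known once it has been walked, the reversed level is just depth minus position. A language-agnostic sketch of that renumbering (the menu names are invented for illustration):

```python
# Each menu item points at its parent; the preferred item is a leaf.
parents = {"reports": None, "sales": "reports", "daily": "sales"}

def path_to_root(item):
    """Walk parent links from the preferred item up to the root,
    mirroring CONNECT BY PRIOR parent_id = id."""
    chain = []
    while item is not None:
        chain.append(item)
        item = parents[item]
    return chain

def levels_from_root(item):
    """CONNECT BY starting at a leaf numbers the leaf as LEVEL 1;
    renumber so the root is level 1: reversed = depth - position."""
    chain = path_to_root(item)          # leaf .. root
    depth = len(chain)
    return {node: depth - i for i, node in enumerate(chain)}

print(levels_from_root("daily"))  # {'daily': 3, 'sales': 2, 'reports': 1}
```

In SQL the same idea can be expressed by wrapping the hierarchical query and computing `MAX(LEVEL) OVER () - LEVEL + 1`, so the "3rd-level item" question becomes a simple equality predicate on the derived column.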
SQL Server transaction log backups: test whether tail log follows last known log backup Posted: 25 Aug 2013 04:23 PM PDT We are using SQL Server with the full recovery model. Given a full backup and a series of log backups, we would like to be able to check whether the log chain is complete from the last full backup to the current tail log. (Without actually restoring these backups; the purpose here is to test the consistency of the backups.) I already know how to do this for the existing backups: using RESTORE HEADERONLY I get the FirstLSN and LastLSN of every file, which can be compared for consecutive files in order to determine whether they are compatible. However, I don't know how to check whether the tail log follows the last log backup. If I had the FirstLSN of the tail log, I could compare it to the LastLSN of the last log backup. But how can I obtain the FirstLSN of the tail log? I need a solution that works from SQL Server 2005 upwards (ideally using T-SQL). So far, I have searched Google to no avail. By the way, I first posted this on Stack Overflow, but migrated it here since it was flagged as off-topic there.
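The chain-checking half of this is mechanical once the RESTORE HEADERONLY values are collected: the chain is unbroken when every backup's FirstLSN equals the previous backup's LastLSN. A small sketch of that comparison (the LSN values are invented):

```python
def chain_is_unbroken(backups):
    """Each entry is a (first_lsn, last_lsn) pair as reported by
    RESTORE HEADERONLY, in backup order. The chain is intact when
    every backup starts exactly where the previous one ended."""
    return all(prev_last == curr_first
               for (_, prev_last), (curr_first, _) in zip(backups, backups[1:]))

headers = [(1000, 1500), (1500, 2300), (2300, 2950)]
print(chain_is_unbroken(headers))                    # no gap
print(chain_is_unbroken(headers + [(3100, 3400)]))   # gap after 2950
```

For the tail-log half, one avenue worth checking per version is `sys.database_recovery_status`, whose `last_log_backup_lsn` column (available from SQL Server 2005) reports where the next log backup would begin; comparing it to the last backup's LastLSN answers the question without taking a tail-log backup, though this should be verified against the documentation for the versions in use.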
Managing constant changes to database Posted: 25 Aug 2013 11:52 AM PDT Here's the deal: I'm in charge of creating a website that will host small web apps. These apps will be photo contests, creative sentences and other similar gibberish. The thing is, since the apps will be constantly changing and new ones will come up, what's the best way to save the config specs for each one? Should I create new tables and fields as I need them? Or should I serialize this data and put it in one field? In the first case, instead of having a table apps, I was thinking of having a table for each app; this way I could have a more modular way of dealing with apps, making changes easier, or more painful (in case the changes apply to all the apps). Or, if I serialize everything, then I would only have one table and won't need to make changes to it, but this option seems a bit lame to me.
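The serialized option trades queryability for flexibility: a new app type needs no schema migration, but the database can no longer index or constrain the individual settings. A sketch of the single-column variant, storing each app's config as JSON (the table and keys are invented for illustration):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE apps (id INTEGER PRIMARY KEY, name TEXT, config TEXT)")

# A new app type is just a new row: no ALTER TABLE, no migration.
photo_contest = {"max_entries": 3, "vote_window_days": 7}
conn.execute("INSERT INTO apps (name, config) VALUES (?, ?)",
             ("photo_contest", json.dumps(photo_contest)))

row = conn.execute("SELECT config FROM apps WHERE name = ?",
                   ("photo_contest",)).fetchone()
config = json.loads(row[0])  # the application re-hydrates the spec
print(config["max_entries"])
```

A reasonable middle ground is to keep the few fields that every app shares (name, dates, status) as real columns that can be indexed and constrained, and serialize only the per-app-type extras.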
Alter mysql database engine doesn't work from cluster to other Posted: 25 Aug 2013 10:16 AM PDT I got into this issue when I tried to alter my database table engine to It did not throw any error, but it did not work. I just want to know why this did not work, as I got it from For a client requirement I may need to alter the engine on an installed database without losing any data. Any ideas or help that anybody can provide to resolve the issue?
How to disable flashback query logging for a specific table (Oracle)? Posted: 25 Aug 2013 09:16 AM PDT We have a specific table that has a lot of activity and creates a lot of change records. The consequence is that the flashback data only goes back a couple of days. That is OK for many cases, but it would be beneficial to have access to more historical data. We would like to either restrict logging on that table or disable it completely. I imagine that we may be able to do this by tablespace; I just have not found much on how to make these changes.
MS Access: error 3045 while linking to back-end Posted: 25 Aug 2013 12:16 PM PDT In our environment multiple users each work with their own copied front-end, with tables linked to a single networked back-end (both files are .mdb). Each user has permission to read and write to the location of the BE, and both FE and BE are set to "shared access". Recently a single user started getting error 3045, "Could Not Use Database; File Already in Use", while trying to open forms using linked data. The same error appears while trying to re-link the tables using the manager. All other users can still work normally. I tried restarting his workstation, removing local files and re-copying the FE, and it didn't work. There is no .ldb file left after all users close the FE; compacting after that and restarting the FE of the unlucky user didn't help either. Said user is working on AC2010, but some others are working on AC2007 and AC2003. Please help! Edit: the BE is stored on Windows Server 2003.
Security for Oracle linked server from SQL Server Posted: 25 Aug 2013 03:16 PM PDT I'm trying to make the linked server from SQL Server 2005 to Oracle more secure by not having a user/pass in the "Be made using this security context:" fields, so only a specific user can access the data from the linked server. So I tried mapping a SQL security user to the Oracle user/pass by inserting it in the "Local server login to remote server login mappings:" area, then setting "For a login not defined in the list above, connections will:" to "Not be made". When I click OK, I get: Is it not possible to map a SQL login to an Oracle login? Is there any way to get this to work?
Statistical Analysis of Data that has to be done in an order? Posted: 25 Aug 2013 09:16 PM PDT Bear with me - this is my first time trying this in SQL Server; normally I have been doing it on the front end ;) I am implementing some analysis on time-coded data series. This is not super complicated stuff, but some of it requires some numbers we do not store in the database, which have to be calculated by aggregating the numbers with a specific algorithm IN ORDER. To give an example:
This cannot be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data. So far - in the past - I pulled the data into the application; now, for the standard stuff, I plan to try to keep it in SQL Server. My problem is this - I can see how that works (acceptably) in SQL Server: But if I put that into a view... and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Anyone have an idea how to do that?
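The ordering problem the question describes can be seen with a running sum: if the filter is applied first and the windowed aggregate is computed over what survives, the result reflects the filtered series rather than the raw one. A sketch with sqlite3 window functions (requires SQLite 3.25+; table and columns invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
conn.execute("CREATE TABLE samples (t INTEGER, val INTEGER, keep INTEGER)")
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                 [(1, 10, 1), (2, 20, 0), (3, 30, 1), (4, 40, 1)])

# Filter FIRST, then compute the running sum over what survives,
# so the aggregate respects the dynamic filter.
rows = conn.execute("""
    SELECT t, SUM(val) OVER (ORDER BY t) AS running
    FROM samples
    WHERE keep = 1
    ORDER BY t
""").fetchall()
print(rows)
```

In SQL Server the equivalent is `SUM(val) OVER (ORDER BY t)` (2012+); since a plain view cannot take the filter as input, an inline table-valued function that accepts the filter parameters and applies them before the window computation is one way to keep this mappable from an ORM, though whether a given ORM maps iTVFs is worth checking.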
Record versioning and promotion Posted: 25 Aug 2013 06:09 PM PDT Let's say we have this hierarchy: Now, this will not reflect real life completely, but:
3. Each city can modify laws. After a city has modified a law, it is available to the city itself and to the regions within the city. After a 3-step approval process, it can become a country-level law, and all children will inherit the new version of the law. Each city can still modify the new version of a law to have changes.
Currently we solve this issue like this: We have a table Because of this we have to use table-valued functions in our application to filter the specific records that are relevant to the current location (some laws come from the country level, some laws come from the city level, some laws come from the organization level). This makes the database much slower. We cannot use native unique constraints on our fields, because all fields must be unique only within a branch, so we have to use custom check constraints to keep the data clean. Also, inserting and editing records is much slower, since we have to promote some records to the country level and then apply data transformations for the other countries. Currently we have around 150 tables, and we use a lot of joins. I'm not sure what's the best way to model complex hierarchical data with versioning. (I'm thinking we need an enhanced version of source control...)
Are there any disadvantages to partitioning on financial year? Posted: 25 Aug 2013 01:16 PM PDT Our current setup has one table per financial year (May 1 - April 30). Each table has approx 1.5 million rows. We have about 8 years of data, and will obviously be adding each year. The majority of queries are within the financial year/one partition. Either My plan is to have a range partition on an InnoDB table, e.g. This means that the PK has to become Are there any significant disadvantages to partitioning compared to having an unpartitioned table? I know that means the PK is now length 12 and all further indexes will have that prepended to them. Does that make a difference? The table needs to work faster on reads than writes, and there are a fair few indexes on it.
We do sometimes need to query across all time or over "the last X months", but this is pretty rare. The main advantages of moving to a single table are eliminating the logic in the application that works out which table to insert/update/select from, and not needing to compute unions in those situations where we need more than one table.
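Since the financial year runs May 1 - April 30, the partition key is a derived value rather than a raw date column. A small sketch of the derivation (labelling each financial year by the calendar year in which it starts is an assumption; the reverse convention works the same way):

```python
from datetime import date

def financial_year(d):
    """Financial year runs 1 May - 30 April; label it by the
    calendar year in which it starts."""
    return d.year if d.month >= 5 else d.year - 1

print(financial_year(date(2013, 4, 30)))  # 2012: last day of FY starting 2012
print(financial_year(date(2013, 5, 1)))   # 2013: first day of FY starting 2013
```

One MySQL-specific consequence worth noting: every unique key on a partitioned table (including the PK) must include the partitioning column, which is exactly why the PK "has to become" a composite key in the plan above; that in turn widens every secondary index on an InnoDB table, since secondary indexes carry the PK.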
Problems in opening mysql query browser Posted: 25 Aug 2013 08:16 AM PDT My MySQL Query Browser doesn't run. I just started to work with XAMPP and PHP, so I just installed XAMPP on my local machine. Now I have a problem opening MySQL Query Browser... What I'm getting while trying to open it is: Click the Can anyone help me fix the above problem?
MongoDB: move documents before capping Posted: 25 Aug 2013 11:16 AM PDT The capped collection concept works well for most of my projects, where cleaning old data without care makes sense. For other projects, I need a more complex and safe concept. The requirement is nearly the same as Similar to logrotate rules, I'd like the main collection not to become too large (capped by size); if possible, capping by timestamp might be a plus. This sounds like a capped collection, but I do not want any data loss when it's capped. The old data should be stored in another db's collection that must be Example:
Before removing old documents, they are safely moved into No data is lost and the main collection remains small and fast. Storing the data in another database avoids db locks; e.g., repairDatabase tasks on an archive file will not affect or delay the main collection. Is there a good practice for achieving this - as reliably and automatically as possible - without hand-writing the data transfer for a cronjob, which must never be missed, because data is lost if capping starts before the old data is copied into the archive?
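Whatever tool drives it, the safety property the question asks for comes from the ordering: copy to the archive, commit, and only then delete from the live store. A sketch of that copy-before-delete ordering using two sqlite databases in place of two MongoDB databases (the table, the `keep` threshold, and the id-based "oldest" rule are all assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
ATTACH DATABASE ':memory:' AS archive;   -- stand-in for the archive db
CREATE TABLE main.events (id INTEGER PRIMARY KEY, payload TEXT);
CREATE TABLE archive.events (id INTEGER PRIMARY KEY, payload TEXT);
""")
conn.executemany("INSERT INTO main.events (payload) VALUES (?)",
                 [("e%d" % i,) for i in range(10)])

def archive_oldest(conn, keep):
    """Copy the oldest rows into the archive first, and delete from
    the live table only rows the archive already holds -- a failure
    between the two steps duplicates data but never loses it."""
    conn.execute("""
        INSERT INTO archive.events
        SELECT * FROM main.events
        WHERE id <= (SELECT MAX(id) - ? FROM main.events)
    """, (keep,))
    conn.commit()
    conn.execute(
        "DELETE FROM main.events WHERE id IN (SELECT id FROM archive.events)")
    conn.commit()

archive_oldest(conn, keep=4)
print(conn.execute("SELECT COUNT(*) FROM main.events").fetchone()[0])
```

In MongoDB terms this would be a scheduled job reading the oldest documents, inserting them into the archive database's collection, and removing only documents confirmed present in the archive; keeping the live collection uncapped (trimmed by the job) avoids the race where capping discards documents before they are copied.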
SUPER privilege not defined for master user in Amazon MySQL RDS Posted: 25 Aug 2013 04:16 PM PDT I have created one medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I go to create a function on that instance, it shows me the following error:
On my instance, the variable (log_bin_trust_function_creators) shows OFF. Now when I go to change that variable, it gives me another error:
I don't know how to solve this error. Can anybody help?
How to import table's data in MySql from Sql Server? Posted: 25 Aug 2013 07:16 PM PDT I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using Here the error may be occurring because the table in SQL Server has a column with data type Please provide your expert answers. If it is not possible through
Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Posted: 25 Aug 2013 05:16 PM PDT Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Conceptually, I was thinking something like: I've seen people pipe the result to mysql for a one-liner backup solution, but I was curious whether it is possible with rsync. You know - because rsync is magic :) Thanks for your time!
Deleting Data From Multiple Tables Posted: 25 Aug 2013 02:16 PM PDT Suppose I've a table called UNIVERSITY containing university names. Now these universities' IDs have (obviously) been used in many tables within the database (named, e.g., Education) - suppose 10 tables. Q. Now what happens if I delete one university? A. The universityID field in the other tables becomes NULL. But I don't want that; rather, when I delete one university from the UNIVERSITY table, all rows referencing it in all 10 tables should get deleted. What is the shortest and easiest MySQL query for this operation? NOTE: I'm using PHP.
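The standard answer is to declare the foreign keys with ON DELETE CASCADE, so a single DELETE on the parent removes the dependent rows in every referencing table - no per-table queries in PHP needed. A sketch with sqlite3 (in MySQL this requires InnoDB tables; the `student` table stands in for the 10 referencing tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs per connection
conn.executescript("""
CREATE TABLE university (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student (
    id INTEGER PRIMARY KEY,
    university_id INTEGER,
    FOREIGN KEY (university_id) REFERENCES university(id) ON DELETE CASCADE
);
INSERT INTO university VALUES (1, 'MIT'), (2, 'Oxford');
INSERT INTO student VALUES (10, 1), (11, 1), (12, 2);
""")
# One statement: the two students of university 1 go with it.
conn.execute("DELETE FROM university WHERE id = 1")
print(conn.execute("SELECT id FROM student").fetchall())
```

On an existing MySQL schema, the constraints can be added after the fact with `ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY ... ON DELETE CASCADE`, provided no orphan rows already violate them.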
Minimizing Indexed Reads with Complex Criteria Posted: 25 Aug 2013 06:16 PM PDT I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as: I generally want to find the first ticket that hasn't been processed and is in My processing loop would be:
Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb on each iteration. Performance doesn't seem to degrade terribly that I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users. I've got an index on -- Edits for comments -- In Firebird you limit row retrieval like: So when I say "first", I'm just asking it for a limited record set where
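The general shape of the hot query is "first unprocessed row in order", and the index that serves it cheaply lists the filter column first and the ordering column second, so the engine can stop at the first match. A sketch with sqlite3, where LIMIT 1 plays the role of Firebird's SELECT FIRST 1 (the ticket columns and status codes are assumptions, since the DDL was clipped from the digest):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tickets (
    id INTEGER PRIMARY KEY,
    status INTEGER,          -- 0 = waiting, 1 = processing, 2 = done
    created INTEGER
);
-- Filter column first, sort column second: the engine can walk the
-- index to the first (status = 0) entry in created order and stop.
CREATE INDEX idx_tickets_status_created ON tickets (status, created);
""")
conn.executemany("INSERT INTO tickets (status, created) VALUES (?, ?)",
                 [(2, 1), (0, 2), (0, 3), (1, 4)])

# Firebird's "SELECT FIRST 1 ... ORDER BY created" maps to LIMIT 1 here.
row = conn.execute("""
    SELECT id FROM tickets
    WHERE status = 0
    ORDER BY created
    LIMIT 1
""").fetchone()
print(row)
```

If the reads still climb as processed tickets accumulate, narrowing what the index covers (for example, Firebird's ability to index an expression that is non-null only for unprocessed rows) is one direction to explore, since a plain status index keeps growing with the done rows.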
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.