[how to] Update oracle sql database from CSV |
- Update oracle sql database from CSV
- SQL Server Restore from one database to another
- MongoDB: move documents before capping
- How can I improve my table design for different types of an entity?
- FOR XML is generating an empty first line
- Identifying which values do NOT match a table row
- Differences Between Two Different Create Index Commands
- MySQL Pivot Table Naming Conventions
- SQL Server TDE Stuck at Encryption_State 2 and 0% on a Tiny DB
- Optimize Query Execution based on two datetime columns (MySQL)
- Invalid rowid error
- Postgres RIGHT JOIN with custom array
- How can I tell if a SQL Server backup is compressed?
- Run Multiple Remote Jobs
- Stored Procedures under Source Control, best practice
- Secure Linked Server - Non privileged user possible? Registry corruption?
- Separate SQL Server data from schema
- Booking system structure
- Reindexing and its effects on SQL cache
- SQL Server 2008 search joining query
- Unique index on 2 columns in mysql
- loading a csv file which is on local system in to Mysql DB which is on remote server
- Why would I NOT use the SQL Server option "optimize for ad hoc workloads"?
- Avoiding performance hit from GROUP BY during FULLTEXT search?
- AWS performance of RDS with provisioned IOPS vs EC2
- sp_startpublication_snapshot Parameter(s)
- Binlog has bad magic number
Update oracle sql database from CSV Posted: 26 Mar 2013 08:43 PM PDT I tried Google first but had no luck. Is it possible to update tables from a CSV file? I am using SQL Developer, and I have a script that exports edited rows to a CSV file. I would like to apply those edited rows from the CSV file on a client. I don't want to import the whole file, as the client already has a mirror table; I just want to update the data to match what is in the CSV file. Is this possible? If not, what would be the best approach? |
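A common approach (not confirmed by the question) is to load the CSV into a staging table — via SQL*Loader, an external table, or SQL Developer's import wizard — and then MERGE it into the mirror table. A minimal sketch, where csv_stage, target_table, and the id/val columns are all placeholder names:

```sql
-- Sketch only: csv_stage is assumed to be already loaded from the CSV
-- (e.g. with SQL*Loader); table and column names are placeholders.
MERGE INTO target_table t
USING csv_stage s
   ON (t.id = s.id)
WHEN MATCHED THEN
    UPDATE SET t.val = s.val;
```

Because the MERGE has no WHEN NOT MATCHED clause, it only updates existing rows, which matches the "update only, don't import the whole file" requirement.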
SQL Server Restore from one database to another Posted: 26 Mar 2013 05:58 PM PDT One of our devs backed up a dev database and then restored it in production. It's a new database for a new app that was intentionally deployed to prod last night. Now in the backupset table (msdb.dbo.backupset) on prod, I can see a record for the dev database, with a backup start date/time of when the restore was done. Record from the prod backupset table: name: DatabaseName_UAT-Full Database Backup server_name: COMPNAME-SQLDEV02 machine_name: COMPNAME-SQLDEV02 I would not expect to see this record. Can anyone explain why a restore would insert into the backupset table on prod? Can I delete this record from the msdb.dbo.backupset table, or is that not such a good idea? Thanks heaps. |
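This is expected behavior: a RESTORE writes the backup's header metadata into msdb on the restoring server, which is why the dev server_name shows up on prod. The companion restorehistory table records the restore itself, and a query like the following (the column choice is illustrative) shows the linkage:

```sql
-- Show restores and the backup-set metadata they brought along.
SELECT rh.restore_date,
       rh.destination_database_name,
       bs.database_name AS source_database,
       bs.server_name   AS source_server
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset      AS bs
  ON bs.backup_set_id = rh.backup_set_id
ORDER BY rh.restore_date DESC;
```

Rather than deleting rows by hand, msdb provides sp_delete_backuphistory for pruning history older than a given date.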
MongoDB: move documents before capping Posted: 26 Mar 2013 03:51 PM PDT The cappedCollection concept works well for most of my projects, where discarding old data without care makes sense. For other projects I need a more complex and safe concept. The requirement is nearly the same as with logrotate rules: I'd like the main collection not to become too large (capped by size; capping by timestamp would be a plus). This sounds like a cappedCollection, but I do not want any data loss when it's capped. The old data should be stored into another database's collection. Example:
Before removing old documents, they are safely moved into the archive. No data is lost, and the main collection remains small and fast. Storing the data in another database avoids db locks; e.g., repairDatabase tasks on an archive file will not affect or delay the main collection. Is there a good practice for achieving this, as reliably and automated as possible, without hand-writing all the data transfer in a cron job that must never be missed, because data is lost if capping starts before the old data is copied into the archive? |
How can I improve my table design for different types of an entity? Posted: 26 Mar 2013 05:30 PM PDT Consider an accounting system as an example. I have an entity that comes in several different types. Currently, I have come up with the following design: But I don't think my design is efficient enough (or even correct and free of errors). What would you suggest? Also, if this is important in any way, I am planning to use MariaDB. |
FOR XML is generating an empty first line Posted: 26 Mar 2013 07:09 PM PDT I'm parsing, with Flash, an XML file generated by this code: But I get an XML file whose first line is empty. If I remove this first empty line, the XML works fine with Flash, but as generated it does not. How can I remove that line? Is my script wrong? I don't know much about this code. I'm running SQL Server 9.0 (2005). |
Identifying which values do NOT match a table row Posted: 26 Mar 2013 04:26 PM PDT I would like an easy way to check which of the unique identifiers supplied in a query do not exist in a table. To better explain, here's what I would do now to check which IDs of the list "1, 2, 3, 4" do not exist in a table:
I'm thinking there's got to be a better way to do this. I'm looking, ideally, for something like
|
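One idiomatic pattern for this (a sketch — my_table and its id column are placeholders) is to turn the supplied list into a row source with a VALUES constructor and anti-join it; this works on SQL Server 2008+ and PostgreSQL:

```sql
-- IDs on the left that find no match on the right come back with NULL,
-- so the WHERE clause keeps exactly the missing ones.
SELECT v.id
FROM (VALUES (1), (2), (3), (4)) AS v(id)
LEFT JOIN my_table AS t
       ON t.id = v.id
WHERE t.id IS NULL;
```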
Differences Between Two Different Create Index Commands Posted: 26 Mar 2013 01:43 PM PDT Are there differences between these two scripts? Or would all of the extra tokens/attributes (i.e.: 1st script: 2nd script: FYI: there is ETL code that drops the index with this script before doing a bulk data load, and then re-creates the index with the 2nd script above. EDIT: After applying the simple index above (2nd script), I did the following: SQL Server Management Studio > expanded table > expanded folder "Indexes" > right-clicked the index > selected "Script Index as.." > selected "CREATE TO" > selected "New Query Editor Window" > got the following. So it appears that these options are applied implicitly by running the simple statement: SORT_IN_TEMPDB = OFF DROP_EXISTING = OFF ONLINE = OFF |
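For comparison, here is the general shape of the two forms, using placeholder index, table, and column names; when the WITH options are omitted, SQL Server applies exactly the defaults that Management Studio later scripts out:

```sql
-- Form 1: options spelled out (these are the defaults anyway).
CREATE NONCLUSTERED INDEX IX_Example
    ON dbo.SomeTable (SomeColumn)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF);

-- Form 2: options omitted; behavior is identical to Form 1.
CREATE NONCLUSTERED INDEX IX_Example
    ON dbo.SomeTable (SomeColumn);
```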
MySQL Pivot Table Naming Conventions Posted: 26 Mar 2013 01:03 PM PDT I have some tables (users, jobs, customers) that are each tied to 'groups'. As each table is linked to 'groups', I feel inclined to call the actual group tables user_groups, job_groups and customer_groups; However, if the tables were just 'users' and 'groups', my pivot table would normally be called 'user_groups'. How would you name these tables?
If I end up with something like user_groups_pivot, would it be acceptable to call other pivot tables something like: users -> user_roles (the pivot) -> roles rather than users -> user_roles_pivot -> roles? I'm very fond of the idea of names being predictable. |
SQL Server TDE Stuck at Encryption_State 2 and 0% on a Tiny DB Posted: 26 Mar 2013 03:12 PM PDT I'm having a heck of a time with this issue and can't figure out what is wrong. I'm not sure how long this tiny DB has been in 'encryption_state = 2' from query: but it won't budge past 0%. State 2 means that it is currently trying to encrypt the DB. No other DB has any issues, encrypted or unencrypted. Command: Result:
Running: actually returns "command completed", but the percentage still stays at 0%. There's no DB corruption, either. Thanks. Edit & update: Restored to a different server with the cert; it goes to 1.187865% for 'percent_complete', then immediately reverts back to 0% in about one second. Firing up Profiler now to catch something in the background, and checking whether Extended Events would help. Oh boy: Profiler shows error 824, suspect DB page. |
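Given the 824 (suspect page) errors caught in Profiler, a consistency check is the obvious next step. The DMV query below is presumably the one already in use for encryption_state; the database name is a placeholder:

```sql
-- Encryption progress per database.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,            -- 2 = encryption in progress
       percent_complete
FROM sys.dm_database_encryption_keys;

-- Check for the corruption implied by the 824 errors.
DBCC CHECKDB (N'TinyDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```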
Optimize Query Execution based on two datetime columns (MySQL) Posted: 26 Mar 2013 12:22 PM PDT I've been struggling with this all day. I've got a very busy database (still in development) and records are inserted very frequently. The log records have a start time and an end time. If I want to select something between col1 (datetime) and col2 (datetime), MySQL can't use indexes properly, because it will search the index for col1 but never look into col2. The database engine is InnoDB. What happens, for example, is that MySQL will examine 80 thousand rows when the requested interval should only return two rows. My biggest problem is that I'm trying to run aggregate functions on these time ranges, and it takes a very long time when it should be really fast considering how many rows it actually counts. Also note that I can't do dateStart BETWEEN col1 AND col2, nor dateEnd BETWEEN col1 AND col2, because dateStart can be lower than col1 and dateEnd can also be lower than col2. Let's assume this sample data: If I need the rows that range between 10:21:30 and 10:25:30, I need to do something like this: '10:25:30' <= col1 AND '10:21:30' >= col2. So how do I index these columns properly? MySQL only picks up one of the date columns in the index. Thanks in advance |
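A sketch of the usual approach, assuming col1 holds the start time and col2 the end time (swap the roles if the table is the other way around): express the window as the standard interval-overlap predicate and give MySQL a composite index so at least the leading column is a tight range. The table name is hypothetical:

```sql
-- MySQL will typically range-scan on col1 and filter on col2
-- from the same composite index.
ALTER TABLE log_records ADD INDEX idx_range (col1, col2);

-- Rows whose [col1, col2] interval overlaps 10:21:30 .. 10:25:30:
SELECT COUNT(*)
FROM log_records
WHERE col1 <= '2013-03-26 10:25:30'   -- started before the window ends
  AND col2 >= '2013-03-26 10:21:30';  -- ended after the window starts
```

A single B-tree index still cannot make both bounds selective at once; for very large tables the usual escalations are partitioning by time or a spatial/R-tree-style encoding of the interval.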
Invalid rowid error Posted: 26 Mar 2013 01:01 PM PDT I'm trying to see how an UPDATE lock helps to minimize errors during DML (delete/update) operations. I'm using very simple code to check how it works, but it is showing "invalid ROWID". Can anyone help me? |
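Since the original code isn't shown, here is a minimal PL/SQL sketch of the pattern being described, with placeholder table and column names. A frequent cause of "invalid ROWID" in this scenario is committing inside a FOR UPDATE cursor loop, which releases the locks the cursor depends on:

```sql
-- Sketch: lock the rows up front, update via the cursor, commit once.
DECLARE
    CURSOR c IS
        SELECT id, val
        FROM demo_tab
        FOR UPDATE;               -- takes row locks when the cursor opens
BEGIN
    FOR r IN c LOOP
        UPDATE demo_tab
        SET val = r.val + 1
        WHERE CURRENT OF c;       -- relies on the cursor's row locks
        -- a COMMIT here would release the locks and can raise ORA-01410
    END LOOP;
    COMMIT;                       -- commit once, after the loop
END;
/
```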
Postgres RIGHT JOIN with custom array Posted: 26 Mar 2013 05:46 PM PDT I'm using Postgres 9.1 and want to get a result with some blanks where there is no data. My query looks like the following: The ... is not important to this question, it's just important that it returns a result with the institution_ids in the array (1, 3, 4, 5, 7, 9) and it includes those institutions with no data. Here is an example of the current output: An example of the output I want is: I know I can achieve this by using the following query: However, this is extra work because generate_series(1,9) creates institution_ids I'm not interested in, it requires that I know the max institution_id a priori, and it introduces an unnecessary WHERE clause. Ideally I'd like a query like the following: Where Any ideas? |
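In the spirit of the custom-array idea above, unnest() turns an array literal into a row source, so only the wanted IDs appear and no extra WHERE clause or generate_series() upper bound is needed; "d" below stands in for the elided subquery from the question:

```sql
-- Sketch: join the data onto exactly the institution_ids of interest.
SELECT i.institution_id, d.count
FROM unnest(ARRAY[1, 3, 4, 5, 7, 9]) AS i(institution_id)
LEFT JOIN ( /* elided subquery from the question */ ) AS d
       USING (institution_id)
ORDER BY i.institution_id;
```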
How can I tell if a SQL Server backup is compressed? Posted: 26 Mar 2013 10:52 AM PDT We have recently upgraded from SQL Server 2005 to SQL Server 2012. Under SQL Server 2005 there is no option to create compressed backups as there is in 2012. How can I tell whether an existing backup file is compressed? |
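One way to answer this without restoring is to read the backup header; the path below is a placeholder. In the result set, the Compressed column is 1 for a compressed backup, and comparing BackupSize with CompressedBackupSize shows the ratio:

```sql
-- Reads only the header, not the data.
RESTORE HEADERONLY FROM DISK = N'X:\Backups\MyDb.bak';
-- Inspect the Compressed, BackupSize, and CompressedBackupSize columns.
```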
Run Multiple Remote Jobs Posted: 26 Mar 2013 02:04 PM PDT I need to manually run a job on more than 150 SQL Server instances (SQL Server 2000, remote) from a SQL Server 2005 instance (the local server). The job is the same on all of these instances; it just calls a stored procedure, without parameters, which is also the same across all the instances. These jobs are on a schedule, but now I'm asked to manually run the job on all instances, or on specified instances, upon request. What is the best practice for this? I have tried OPENROWSET to call the remote stored procedure, but each run of the job takes a couple of minutes, so if I use a loop to run all these jobs they run one by one, which takes a long time. Ideally, it should run the stored procedure on each instance without waiting for it to finish. Even better, it should run the job on each instance without waiting for it to finish, so a record is left in the job history on each instance. The stored procedure is from a third party, so it can't be altered. |
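If linked servers to the 150 instances are an option, one sketch is to call sp_start_job remotely: it queues the job and returns immediately, so the loop doesn't wait, and each instance keeps its own job history. The linked-server and job names below are placeholders, and "RPC Out" must be enabled on each linked server:

```sql
-- Fire-and-forget: sp_start_job returns as soon as the job is queued.
EXEC [REMOTE01].msdb.dbo.sp_start_job @job_name = N'RunThirdPartyProc';
EXEC [REMOTE02].msdb.dbo.sp_start_job @job_name = N'RunThirdPartyProc';
-- ...repeat (or generate dynamically) for the remaining instances.
```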
Stored Procedures under Source Control, best practice Posted: 26 Mar 2013 04:13 PM PDT I am currently using TortoiseSVN to source-control a .NET web application. What would be the best way to bring our SQL Server stored procedures into source control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT). What I have been doing in the past is saving the procs to .sql files and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install in VS2010, SSDT, or even SQL Server on the production machine? |
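Whatever tooling is chosen, the usual convention is one .sql file per procedure, written so the script is re-runnable; on SQL Server 2008 R2 (which predates CREATE OR ALTER) that means a drop-and-create guard. A sketch with a placeholder procedure name:

```sql
IF OBJECT_ID(N'dbo.usp_Example', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_Example;
GO
CREATE PROCEDURE dbo.usp_Example
AS
BEGIN
    SET NOCOUNT ON;
    -- procedure body goes here
    SELECT 1 AS placeholder;
END
GO
```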
Secure Linked Server - Non privileged user possible? Registry corruption? Posted: 26 Mar 2013 04:39 PM PDT Is it possible to use a non-privileged Windows domain account to impersonate itself across a linked server? And why would it be unable to read the registry for available network protocols? Overview: The only way I am able to have a scheduled job use a linked server is when the local account is mapped to a remote SQL account; I am unable to use 'Impersonate.' Details:
Within the named instance (Server_A_Named):
In SAN-B, I've used SQL Native Client 10.0 + OLE DB Provider for SQL. Under the Security for SAN-B, I have 3 accounts:
For logins not defined, connections will not be made. As ADuserSysAdmin, I can click on test connection and it works. Only way to get linked server to work for NonPrivADuser is to have it map to a local SQL account on Server_B NonPrivADuser has access on Server_B's database as well. This is the error that NonPrivADuser receives while trying to access the linked server using 'impersonate': Executed as user: DOMAIN\NonPrivADuser. SQL Server Network Interfaces: Error getting enabled protocols list from registry [xFFFFFFFF]. [SQLSTATE 42000] (Error 65535) OLE DB provider "SQLNCLI10" for linked server "SAN-B" returned message "Login timeout expired". [SQLSTATE 01000] (Error 7412) OLE DB provider "SQLNCLI10" for linked server "SAN-B" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". [SQLSTATE 01000] (Error 7412). The step failed. I fired up procmon on Server_A while trying to use the linked server, SAN_B. SQLAGENT.EXE can read HKLM\SOFTWARE\Microsoft\MSSQLSERVER\Client\SNI10.0 SQLSERVR.EXE receives a BAD IMPERSONATION on the same key. I fired up regedit and 'users' has read permissions on that key. |
Separate SQL Server data from schema Posted: 26 Mar 2013 05:37 PM PDT I am facing a kind of strange request: We have application installations world wide. One of the countries that we wish to do business in has some strict laws regarding the handling of the data, such that it would be advantageous for us to store the data within the boundaries of that country. So far, nothing weird, right? We will have an instance of our SQL Server database hosted within the borders of said country. Here's the part that is outside of my knowledge: Someone in management heard that some other firms do this by keeping the schema and indexes in a local location, but storing the data in an encrypted form in the other country. The only thing that I could think of that might support this would be to put the tables that contain sensitive data in a separate file or file group from the rest of the database. However, in this case, there would be an ocean between the file and the server! I can't imagine that we would get good performance from this sort of arrangement. Is there anyone out there who has had experience with this sort of request? What technologies can I look at to accomplish this? |
Booking system structure Posted: 26 Mar 2013 09:31 AM PDT I am making a room booking system for a friend's business. Background: 3 rooms, multiple clients, bookings available 9-5, bookings last 1 hour. For the database, is this too simple? Booking record table
Client table
Room table
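The three tables above can be sketched as DDL like this (names and types are assumptions, not taken from the question); the UNIQUE constraint is what stops a room from being double-booked for the same one-hour slot:

```sql
CREATE TABLE room (
    room_id   INT PRIMARY KEY,
    name      VARCHAR(50) NOT NULL
);

CREATE TABLE client (
    client_id INT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL
);

CREATE TABLE booking (
    booking_id   INT PRIMARY KEY,
    room_id      INT NOT NULL REFERENCES room (room_id),
    client_id    INT NOT NULL REFERENCES client (client_id),
    booking_date DATE NOT NULL,
    start_hour   INT NOT NULL,   -- 9 through 16 for 9-to-5, 1-hour slots
    UNIQUE (room_id, booking_date, start_hour)
);
```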
Thanks for your help. Nathan |
Reindexing and its effects on SQL cache Posted: 26 Mar 2013 01:37 PM PDT We're currently looking into reindexing our SQL database and cannot find any information on whether or not the query cache would be affected. Any help or information would be great. We are using SQL Server 2005 as our DBMS. |
SQL Server 2008 search joining query Posted: 26 Mar 2013 09:03 PM PDT I have two tables: one is the message table and the other is the messageUser table. Now I need to run a check before inserting a new row. For example:
if a member creates a new message, I would like to check whether the users in this message already share an existing conversation. If yes, attach the message to the previous conversation; otherwise, create a new conversation.
I've tried the checking query with UNION and IN, but it basically just returns the whole list. Can anyone give me a hand? Thanks. UPDATE: I can solve scene 1, but my query fails to suit scene 2 |
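Since the actual queries are elided, here is one hedged sketch of the exact-participant check using relational division; messageUser, conversation_id, and user_id are assumed names, and users 1 and 2 stand in for the new message's recipients:

```sql
-- A conversation matches only if it has exactly these two participants.
SELECT mu.conversation_id
FROM messageUser AS mu
GROUP BY mu.conversation_id
HAVING COUNT(*) = 2
   AND SUM(CASE WHEN mu.user_id IN (1, 2) THEN 1 ELSE 0 END) = 2;
```

If no row comes back, insert a new conversation; otherwise attach the message to the returned conversation_id.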
Unique index on 2 columns in mysql Posted: 26 Mar 2013 03:49 PM PDT I have one table in MySQL. Here is the schema of the table ( now, I want to create a unique index on example: now I am inserting How do I prevent this? |
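Since the schema is elided, here is a generic sketch with placeholder table and column names: a composite unique index rejects a second row with the same pair of values.

```sql
-- Duplicate (col_a, col_b) pairs now fail with a duplicate-key error.
ALTER TABLE my_table
    ADD UNIQUE INDEX uq_pair (col_a, col_b);
```

Note that this treats (1, 2) and (2, 1) as different pairs; if those should also collide, store the values in a normalized order (e.g. the smaller value first) before inserting.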
loading a csv file which is on local system in to Mysql DB which is on remote server Posted: 26 Mar 2013 12:17 PM PDT Can we directly load a CSV file (which is on the local system) into a MySQL DB (which is installed on a remote server)? The 'LOAD DATA INFILE' command can only load files from the server's local filesystem. |
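Yes — the LOCAL variant reads from the client machine and streams the file to the remote server, provided local_infile is enabled on both client and server. A sketch with a placeholder path, table, and CSV format:

```sql
LOAD DATA LOCAL INFILE '/home/me/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip a header row, if present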
Why would I NOT use the SQL Server option "optimize for ad hoc workloads"? Posted: 26 Mar 2013 09:43 AM PDT I've been reading some great articles regarding SQL Server plan caching by Kimberly Tripp such as this one: http://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads/ Why is there even an option to "optimize for ad hoc workloads"? Shouldn't this always be on? Whether the developers are using ad-hoc SQL or not, why would you not have this option enabled on every instance that supports it (SQL 2008+), thereby reducing cache bloat? |
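For reference, enabling the option is a two-step sp_configure change; it is instance-wide and affects only newly compiled plans. The main counter-argument is that a genuinely reused ad hoc statement must be compiled twice before its full plan is cached, since only a stub is stored on first execution:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```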
Avoiding performance hit from GROUP BY during FULLTEXT search? Posted: 26 Mar 2013 08:51 AM PDT Is there any clever way to avoid the performance hit from using GROUP BY during a fulltext search? In this example it's fetching the lowest post_id for each unique topic_id that matches the text. With the GROUP BY to find the MIN, it takes 600 ms in a million-row database, with about 50K rows examined. If I remove the MIN but leave the GROUP BY, it's just as slow, so it's the GROUP BY that hurts. I suspect this is because it can only use one index, the fulltext one? Without the GROUP BY it's 1 ms, so this has to be filesort speed? (I've removed ORDER BY and everything else to isolate where the hit is.) Thanks for any insight and ideas. (Using MyISAM under MariaDB, if it matters.) |
AWS performance of RDS with provisioned IOPS vs EC2 Posted: 26 Mar 2013 10:22 AM PDT Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of non-high IOPS RDS vs EC2 but nothing with the new high IOPS feature in RDS. |
sp_startpublication_snapshot Parameter(s) Posted: 26 Mar 2013 02:51 PM PDT I am creating a stored procedure that:
Here is my new brick wall... On this DB server, I have a 'shell' db that they will be running the SP from, which has a history table so I can keep track of who created/deleted databases using my SPs... The only parameter for sp_startpublication_snapshot is @publication... I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database? i.e., the publication shows up as: but I am running the script from the database [WC_QACatalog]. Any ideas about how to accomplish this? Thank you, Wes |
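System replication procedures can usually be invoked with a three-part name, so the shell database does not have to be the publishing database. A sketch with placeholder database and publication names:

```sql
-- Run the proc in the context of the publishing database.
EXEC [PublishingDb].dbo.sp_startpublication_snapshot
     @publication = N'MyPublication';
```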
Binlog has bad magic number Posted: 26 Mar 2013 08:51 PM PDT I keep getting this error whenever I start MySQL. I have already tried this. I have an EasyPHP 12.1 setup on a Windows 7 x64 PC. |
You are subscribed to email updates from Recent Questions - Database Administrators - Stack Exchange