[how to] Are there any problems with deploying an SQL Server database to a production server by taking a backup? |
- Are there any problems with deploying an SQL Server database to a production server by taking a backup?
- psql, record separators in the data
- why is this query taking so long?
- I need to do test case on CDP (corporate data program) CARD create CDP Staging tables and incorporate into extract process [on hold]
- Joining data from 2 different servers
- How to index for a group by with columns from different tables?
- Benefit to keeping a record in the database, rather than deleting it, for performance issues?
- Looking for performant database-design for large data sets modeling availability problem [on hold]
- CHECKPOINT prevents database drop
- When to make changes to the cost threshold for parallelism
- how to configure the mysql master&slave to backup master to slave?
- opinion about the migration to hibernate orm [on hold]
- Database design with multiple tables
- Why my query works on VIEW and doesn't work on a similar table?
- Recovering a dropped column?
- Data center from scratch which way to go
- Selecting with multiple ands
- How to -create- performance issues? - oracle [on hold]
- Access denied when disabling agent job, despite SqlAgentOperator membership
- pg_dump format => custom vs sql
- How to make Postgres autovacuum not impact performance?
- Why doesn't running a full sample *always* improve query performance
- How to run a SELECT query within while loop in PHP?
- Oracle 11g http listener configuration for PL/SQL server pages
- MySql one time event never runs?
- Getting 'specified network password is not correct' when trying to change password
- Slow SSRS Report in production
- How to run a cold backup with Linux/tar without shutting down MySQL slave?
Posted: 02 Sep 2013 08:35 PM PDT This is a slightly loaded question in that I have already assumed that the described scenario is wrong. A DBA is deploying an application I have written that includes an MS SQL Server 2008 database. He has asked me to take a database backup from my development machine so he can restore it to the production server, thus deploying it. This is a greenfield deployment so there is no existing data to be migrated. I was expecting to provide a DDL script, which I have diligently tested and ensured that it contains everything required. If I execute it in SSMS, the database is created in one click. To me, using the backup facility for deployment does not seem right, but without being an expert in SQL server I can't think of a solid reason not to do it. I would have thought, for example, that there would be some 'contamination' of the database from the development machine - perhaps the computer name, directory structure or user names stored in there somewhere. Is this the case, or is backup and restore a valid deployment technique? |
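For reference, a restore-based deployment typically looks something like the sketch below (the file paths and logical file names are placeholders, not taken from the question); the WITH MOVE clauses are exactly where the development machine's directory layout would otherwise carry over to production:

    -- list the logical file names inside the backup first
    RESTORE FILELISTONLY FROM DISK = N'C:\Deploy\MyAppDb.bak';

    RESTORE DATABASE MyAppDb
        FROM DISK = N'C:\Deploy\MyAppDb.bak'
        WITH MOVE N'MyAppDb_Data' TO N'D:\SQLData\MyAppDb.mdf',
             MOVE N'MyAppDb_Log'  TO N'E:\SQLLogs\MyAppDb.ldf',
             RECOVERY;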
psql, record separators in the data Posted: 02 Sep 2013 08:08 PM PDT I want to use psql to list all of the databases on a Postgres server, so that the output can be parsed by a script. This command lists them: but the output shows an obvious issue: the records are separated by newlines, yet the data itself also contains newlines (apparently the access-privileges column, with values like postgres=CTc/postgres, wraps onto following lines). Using the -R option I can change the record separator, but it seems that no matter what I change it to, there is a risk of that string appearing in the data. Is it possible to instead tell psql to replace the newlines in the data with something else? (And then what if that string also appears in the data?) I also tried to set the record separator to a null character with such sequences as The other option I know of to list all databases is: but that requires me to give the password for the postgres user, so it's not desirable. Perhaps there is another way to get a list of the names of all databases? |
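One script-friendly approach, shown here only as a sketch, is to skip \l entirely and run a plain query in unaligned, tuples-only mode (the -X flag just skips any psqlrc):

    psql -X -A -t -c "SELECT datname FROM pg_database WHERE NOT datistemplate ORDER BY 1"

This prints one database name per line with no headers or decoration, so the multi-line access-privileges column never enters the picture.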
why is this query taking so long? Posted: 02 Sep 2013 07:34 PM PDT
+----+-------------+-----------+----------+---------+------+----------+--------------------------------------------------------+
| Id | User        | Host      | db       | Command | Time | State    | Info                                                   |
+----+-------------+-----------+----------+---------+------+----------+--------------------------------------------------------+
| 39 | tigase_user | localhost | tigasedb | Query   | 5406 | updating | delete from tig_pairs where uid >= 2 and uid <= 700000 |
| 40 | tigase_user | localhost | tigasedb | Query   | 0    | NULL     | show processlist                                       |
+----+-------------+-----------+----------+---------+------+----------+--------------------------------------------------------+
It just stays in the 'updating' state for most of the 5406 seconds. MySQL version: Server version: 5.1.69-0ubuntu0.11.10.1-log (Ubuntu). Thanks. |
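As a first diagnostic sketch (table and column names taken from the processlist above), it is worth confirming whether uid is indexed and how many rows the range actually covers; a range delete over hundreds of thousands of rows with no index on uid will scan the whole table, and even with an index it can be broken into smaller batches:

    SHOW CREATE TABLE tig_pairs\G
    EXPLAIN SELECT COUNT(*) FROM tig_pairs WHERE uid >= 2 AND uid <= 700000;
    -- delete in smaller chunks so each statement commits quickly (repeat until 0 rows affected)
    DELETE FROM tig_pairs WHERE uid >= 2 AND uid <= 700000 LIMIT 10000;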
I need to do test case on CDP (corporate data program) CARD create CDP Staging tables and incorporate into extract process [on hold] Posted: 02 Sep 2013 04:39 PM PDT Requirements: The CDP source tables are not stored in BFS_LOAD_STAGING; they are written directly to CDP_STG, because the card files are deltas and we need to maintain the full dataset somewhere, and the CDP_STG schema was chosen as the operational store area. CDP CARD Solution:
Testing:
|
Joining data from 2 different servers Posted: 02 Sep 2013 01:57 PM PDT Is it possible to host my databases on 2 different servers? For example, if I store my comments on Heroku and store users on some other server, is it possible to join across the two servers? |
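If both databases are PostgreSQL (which a Heroku setup suggests, though that is an assumption here), one sketch is to expose the remote table locally through a foreign data wrapper such as postgres_fdw (PostgreSQL 9.3+) and then join as usual; the server names, table layout and credentials below are placeholders:

    CREATE EXTENSION postgres_fdw;
    CREATE SERVER users_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'other-host.example.com', dbname 'users_db');
    CREATE USER MAPPING FOR CURRENT_USER SERVER users_srv
        OPTIONS (user 'app', password 'secret');
    CREATE FOREIGN TABLE users (id int, name text) SERVER users_srv;
    -- a cross-server join now looks like an ordinary join
    SELECT c.id, u.name FROM comments c JOIN users u ON u.id = c.user_id;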
How to index for a group by with columns from different tables? Posted: 02 Sep 2013 01:46 PM PDT Is it possible to create an index from columns of 2 different tables? For example, the following statement uses indexes if I have one for (p.name, p.lastname) and one for |
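A single index cannot span two tables in MySQL or PostgreSQL; the usual sketch (with hypothetical table and column names, since the original query is not shown) is one composite index per table, covering the join key plus the grouped columns of that table:

    CREATE INDEX idx_person_group ON person (name, lastname);
    CREATE INDEX idx_order_person ON orders (person_id);
    -- alternatively, pre-aggregate into a materialized/indexed view if the DBMS supports it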
Benefit to keeping a record in the database, rather than deleting it, for performance issues? Posted: 02 Sep 2013 02:57 PM PDT I have a client I am building a Rails app for, using PostgreSQL. He made a comment about preferring to hide records rather than delete them, and since this is the first time I have heard of this practice, I figured I would ask you to hear your thoughts.
In all the Rails apps I have built, I have never had an issue with records being deleted and it affecting the index. Am I missing something? Is this a problem that used to exist, but modern RDBMS products have fixed it? |
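For what it's worth, the "hide rather than delete" idea is usually implemented as a soft delete; a minimal PostgreSQL sketch (the table and column names are illustrative, not from the question) adds a deletion timestamp and a partial index so queries for live rows stay fast:

    ALTER TABLE widgets ADD COLUMN deleted_at timestamptz;
    UPDATE widgets SET deleted_at = now() WHERE id = 42;        -- the "delete"
    CREATE INDEX widgets_live_idx ON widgets (id) WHERE deleted_at IS NULL;
    SELECT * FROM widgets WHERE deleted_at IS NULL;              -- normal reads see only live rows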
Looking for performant database-design for large data sets modeling availability problem [on hold] Posted: 02 Sep 2013 05:37 PM PDT I am looking for a performant database design to store and query the location, availability and other attributes of a shared "good" in an online sharing community that is yet to be built (based on Drupal + MySQL). Due to an NDA I'll use a car-sharing community as an example: users can rent cars provided by other users. The cars have attributes and availability that need to be stored and queried. Processing general attributes and locations is not the problem, and the proximity search is not that difficult either, but querying all available cars seems to be tough. A few examples of possible queries:
Cars that aren't available should not appear in the query result, but cars that are available in the queried range (though perhaps not at other times/dates) should appear, because they are available to rent. The project will be huge: there will be some hundreds of thousands of users and cars and millions of contracts. Any ideas how to store the availability or unavailability in the database? One idea is to use an RDBMS (MySQL) with three database tables, one for the:
The dates/times during which cars aren't available could be split into one row per day and contract, like this (BOOKING_ID in this example matches the query number from the query examples above; the booking table refers to the car table, so all the information is connected): To query for available cars I "just" need to query for the general attributes, join the table above, and check that there are no bookings in the desired period of time. The downside is that when there are many cars and bookings, and even more not-available dates, the joins become really large, and I guess it will get very slow soon. I am not sure whether my concerns about SQL (MySQL) are justified, or whether it is just paranoia triggered by the increased popularity of NoSQL DBMSs. Although the Drupal-based website primarily uses MySQL, an alternative approach might be to duplicate the information for cars and bookings in MongoDB and keep a document for each car that includes its booking information, like the example below. I am not sure whether that example is correct, nor how to query it efficiently, as I have not yet worked with MongoDB. The idea of using MongoDB came up while I was thinking about using Apache Solr to search for available cars. While Solr could be seen as document-based storage, I guess MongoDB would be similar but perhaps with less overhead, because no text-based search is required. With both approaches I would run a daily job to clean out irrelevant (outdated, past-booking) data to keep the tables/collections as small as possible. It would be nice to get answers and comments about the approaches above. Will one or the other work fine? Are there better ways/models to handle this problem? |
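For the relational approach, the availability check can usually be expressed without materializing one row per unavailable day; a sketch of the anti-join (table and column names are assumptions) for "cars free between two timestamps" looks like this:

    SELECT c.car_id
    FROM   cars c
    WHERE  NOT EXISTS (
             SELECT 1
             FROM   bookings b
             WHERE  b.car_id   = c.car_id
             AND    b.start_ts < '2013-09-10 18:00'   -- requested end of the rental
             AND    b.end_ts   > '2013-09-10 08:00'   -- requested start of the rental
           );
    -- a composite index on bookings (car_id, start_ts, end_ts) keeps the subquery cheap

The design choice here is to store one row per booking (an interval) rather than one row per blocked day, which keeps the bookings table roughly the size of the contracts table.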
CHECKPOINT prevents database drop Posted: 02 Sep 2013 01:28 PM PDT I'm working in the following scenario: I'm creating a script that recreates an entire database. The script involves the destruction and recreation of the database and all its objects, as well as a great number of insertions from linked servers. The idea is to be able to recreate the database at will until the development phase ends. My problem is that the script generates high log activity, and soon a CHECKPOINT occurs. I'm killing all connections in my script prior to dropping the database, but as CHECKPOINT is a system process I can't kill it, and my script fails. I don't need recovery for this database at this stage; my question is whether I can disable the logs for this database to avoid the CHECKPOINT. |
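Logging cannot be turned off entirely in SQL Server, but for a rebuild-at-will development database a common sketch is to keep it in SIMPLE recovery while loading and to force the drop past any remaining activity (the database name is a placeholder):

    ALTER DATABASE DevRebuild SET RECOVERY SIMPLE;
    -- ... run the load ...
    ALTER DATABASE DevRebuild SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE DevRebuild;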
When to make changes to the cost threshold for parallelism Posted: 02 Sep 2013 10:35 AM PDT While examining a performance issue, I have seen an influx of CXPACKET waits, suggesting I might need to look at the cost threshold for parallelism and perhaps MAXDOP. Before making any drastic changes to MAXDOP I have followed the advice of many others, including that of @mrdenny in the answer to CXPACKET Waits performance tune for SQL Server 2008 and @Aaron Bertrand's answer to Dealing with CXPACKET waits - setting cost threshold for parallelism. I have added a nightly maintenance step to fully update statistics, which feels like a sensible move. However, modifying the cost threshold is still something which niggles me. At what point should the cost threshold for parallelism be altered? Does anyone have an example of where (after examining the cost of their queries and workload) they made a change to this value? Apologies if this has been answered in a previous question. Thanks! |
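For reference, the setting itself is changed with sp_configure; the value 25 below is purely illustrative, not a recommendation:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 25;
    RECONFIGURE;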
how to configure the mysql master&slave to backup master to slave? Posted: 02 Sep 2013 07:20 PM PDT I have completed replicating one database from the master to the slave. Now I want to replicate two databases from the master to the slave, but using the same method it doesn't work. What should I do? Thank you for your help.

This is my master's my.cnf:

    [mysqld]
    # master configure
    server-id = 1
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    user=mysql
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

and this is my slave's my.cnf:

    [mysqld]
    # configure master-slave
    server-id=2
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    user=mysql
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

The backup method is as follows. On the master I do:

    mysql> GRANT REPLICATION SLAVE ON *.* TO slave@192.168.74.236 IDENTIFIED BY 'hello';
    mysql> GRANT FILE,SELECT,REPLICATION SLAVE ON *.* TO slave@192.168.74.236 IDENTIFIED BY 'hello';
    mysql> FLUSH TABLES WITH READ LOCK;

then cd /var/lib/mysql and tar -cvf data.tar web_db just_test (web_db and just_test are the data folders I want to back up), then scp data.tar to the slave and tar -xvf it, and so on. Back on the master I run:

    mysql> UNLOCK TABLES;

then restart the mysqld service on both master and slave, then run the following on the master:

    mysql> SHOW SLAVE STATUS\G
    mysql> SHOW MASTER STATUS\G

then run the following on the slave:

    [root@Slave mysql]# mysql -h 127.0.0.1 -u root -phello
    mysql> CHANGE MASTER TO MASTER_HOST='192.168.74.235', MASTER_USER='slave', MASTER_PASSWORD='hello', MASTER_LOG_FILE='mysql-bin.000001';

and this is the slave status:

    ************* 1. row *************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 192.168.74.235
    Master_User: slave
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000001
    Read_Master_Log_Pos: 106
    Relay_Log_File: mysqld-relay-bin.000002
    Relay_Log_Pos: 251
    Relay_Master_Log_File: mysql-bin.000001
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB: webdb,just_test
    Replicate_Ignore_DB: mysql
    Replicate_Do_Table:
    Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
    Last_Errno: 0
    Last_Error:
    Skip_Counter: 0
    Exec_Master_Log_Pos: 106
    Relay_Log_Space: 407
    Until_Condition: None
    Until_Log_File:
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File:
    Master_SSL_CA_Path:
    Master_SSL_Cert:
    Master_SSL_Cipher:
    Master_SSL_Key:
    Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 0
    Last_SQL_Error:
    1 row in set (0.00 sec)

and this is the error log:

    130829 15:39:49 [Note] Error reading relay log event: slave SQL thread was killed
    130829 15:39:49 [Note] Slave I/O thread killed while connecting to master
    130829 15:39:49 [Note] Slave I/O thread exiting, read up to log 'FIRST', position 4
    130829 15:39:51 [Note] 'CHANGE MASTER TO executed'. Previous state master_host='192.168.74.235', master_port='3306', master_log_file='', master_log_pos='4'. New state master_host='192.168.74.235', master_port='3306', master_log_file='mysql-bin.000001', master_log_pos='4'.
    130829 15:39:56 [ERROR] Slave I/O: error connecting to master 'slave@192.168.74.235:3306' - retry-time: 60 retries: 86400, Error_code: 2013
    130829 15:39:56 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000001' at position 4, relay log './mysqld-relay-bin.000001' position: 4
    130829 15:41:56 [Note] Slave I/O thread: connected to master 'slave@192.168.74.235:3306', replication started in log 'mysql-bin.000001' at position 4
    130829 15:42:20 [Note] Error reading relay log event: slave SQL thread was killed
    130829 15:42:20 [Note] Slave I/O thread killed while reading event
    130829 15:42:20 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000001', position 106
    130829 15:42:23 [Note] 'CHANGE MASTER TO executed'. Previous state master_host='192.168.74.235', master_port='3306', master_log_file='mysql-bin.000001', master_log_pos='106'. New state master_host='192.168.74.235', master_port='3306', master_log_file='mysql-bin.000001', master_log_pos='106'.
    130829 15:42:27 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000001' at position 106, relay log './mysqld-relay-bin.000001' position: 4
    130829 15:42:27 [Note] Slave I/O thread: connected to master 'slave@192.168.74.235:3306', replication started in log 'mysql-bin.000001' at position 106

What should I do? My English is poor, so maybe I haven't explained my meaning well. Thank you for your help! |
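Since the goal is to replicate two databases, one hedged sketch is to list each database on its own replicate-do-db line in the slave's my.cnf (a single comma-separated value is treated as one literal database name), making sure the names match the databases on the master exactly:

    # slave my.cnf - one line per replicated database (sketch)
    replicate-do-db = web_db
    replicate-do-db = just_test

After editing, restart the slave and check SHOW SLAVE STATUS\G again; Replicate_Do_DB should then list both names.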
opinion about the migration to hibernate orm [on hold] Posted: 02 Sep 2013 11:41 AM PDT We are investigating the use of Hibernate ORM as a replacement for the traditional query-based DAOs in a Java ERP system. Currently we have:
And we have the following questions:
|
Database design with multiple tables Posted: 02 Sep 2013 12:53 PM PDT I have a data structure with two main components: Projects and Results. I have a table called Project with multiple fields such as Id, Name, Title, StartDate, EndDate and many more that describe a project, and each project can have many Results. At first I thought of making a single results table, but there is a problem: I have many types of results (around 15), and they do not share many common fields. Then I thought of making a separate table for each type of result, but then it is very hard to get all the results for a project, because they can be in any of the result tables. To be more clear, I have the following data. For Project: ProjectID, UserID, Type, Title, Summary, StartDate, EndDate, Value, Website. For results I have the following types:
How can I structure the database so that it works in the best way? |
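One common way to model this (a sketch only, with invented column names) is a supertype/subtype layout: a single Results table holding the shared fields and the project link, plus one narrow table per result type holding only that type's specific fields:

    CREATE TABLE Result (
        ResultId   INT PRIMARY KEY,
        ProjectId  INT NOT NULL REFERENCES Project(ProjectID),
        ResultType VARCHAR(30) NOT NULL,
        Title      VARCHAR(200)
    );
    -- one table like this per result type (around 15 in total)
    CREATE TABLE PublicationResult (
        ResultId   INT PRIMARY KEY REFERENCES Result(ResultId),
        Journal    VARCHAR(200),
        PubYear    INT
    );

Listing every result for a project then only touches the Result table; the type-specific tables are joined in when the details of a particular result are needed.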
Why my query works on VIEW and doesn't work on a similar table? Posted: 02 Sep 2013 10:12 AM PDT I have a view openstreetmapview that combines two tables with different columns. The first table is openstreetmap and the second is B. When I execute this query on the view: I get around 50k results. When I execute the same query on the table, even though the table has the same column that the query needs, it returns 0 rows. Why is that? I am using PostgreSQL 8.4 with PostGIS. |
Recovering a dropped column? Posted: 02 Sep 2013 09:45 AM PDT I have a PostgreSQL table: I inserted a few rows, then ran: How can I recover the data? I'm willing to use any means. |
Data center from scratch which way to go Posted: 02 Sep 2013 11:05 AM PDT My budget for a data center (hardware, OS licences, SQL Server licences without CALs) is around $50K-$60K. I have around 200 users, and my apps only work in a Microsoft environment (Windows Server and SQL Server). I've talked to a lot of "so called" experts and every one had a different vision for my data center; the more I talk with vendors the more puzzled I am about how I should design it, perhaps because I have not yet met a SQL Server expert who knows what to do. There are very few IT specialists in my area. My app is not mission critical and we can afford downtime of up to one working day (8 hours). I currently have one main database (80 GB of data and 20 GB of SQL Server log) for writing transactions, and around 200 users communicating with my SQL Servers through a web server. Here is what I have considered doing so far.
Personally, I do not have experience with virtualization and storage systems; I am more comfortable with SQL Server replication. Here are the questions where I am seeking direction on which way to go. Should I avoid virtualization for machines dedicated to running SQL Server? Is SQL Server 2012 ready for production? I know this is perhaps a silly question, but one of the IT guys with plenty of certificates says they always run production instances on the second-to-last Microsoft platform, and I am really confused by how strongly he claims they will not install and configure the 2012 generation of MS servers (OS, SQL Server). Do I need SSDs besides plenty of RAM? Unfortunately, I can't do much to redesign queries! Here is what I captured during a 3-hour Monday peak time. Reporting database (replication subscriber): This is the transactional DB (replication publisher): Transaction DB: |
Selecting with multiple ands Posted: 02 Sep 2013 05:52 PM PDT I am having a little trouble wrapping my head around filtering a query in the way that I need. My current SQL: Which produces a result like this: I have also tried wrapping the end What I need is to be able to return the product identifiers that match both of the last two The number of these |
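It is hard to be precise without the original query, but if the data lives in an attribute/value style table (an assumption here, including all names below), the usual sketch for "must match all of N conditions" is grouping with a HAVING count:

    SELECT product_id
    FROM   product_attributes
    WHERE  (attribute, value) IN (('colour', 'red'), ('size', 'large'))
    GROUP  BY product_id
    HAVING COUNT(DISTINCT attribute) = 2;   -- must satisfy both conditions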
How to -create- performance issues? - oracle [on hold] Posted: 02 Sep 2013 11:43 AM PDT I want to dedicate some time to learning more about performance and tuning issues, and I have set aside a clean DB for that. I want to know how I can load it with some data and with queries/DML/DDL that cause performance problems. Do you know of some scripts that can cause/generate that (purposely or not)? The idea is to learn the basics (and maybe more) of EM/AWR/ADDM etc. from a tuning and performance perspective. I don't really know what specifically to ask, but I'm guessing index problems, SELECT issues, wrong ways of accessing the data, etc. Books are great, but I have the knowledge; I need some actual hands-on work. Maybe it's not a Q&A at all, but for me any answer (as long as it is an answer to my question) will do. EDIT: I don't want just a server load script - that has no point, plus that's not what I want. |
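As a starting point, a deliberately wasteful query against the Oracle data dictionary (a sketch only; it simply burns CPU and I/O with an unconstrained cross join, so run it on a scratch instance) will show up clearly in AWR/ADDM and the Enterprise Manager top-activity pages:

    -- generates load on purpose; the cross join has no join condition
    SELECT COUNT(*)
    FROM   all_objects a
           CROSS JOIN all_objects b
    WHERE  ROWNUM <= 50000000;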
Access denied when disabling agent job, despite SqlAgentOperator membership Posted: 02 Sep 2013 07:58 PM PDT I am attempting to disable a SQL Agent job from an account which is a member of the SQLAgentOperatorRole msdb role (but not sysadmin). The documentation says this should be possible:
...and indeed the code for msdb.dbo.sp_update_job appears to support this - it has explicit handling for exactly this scenario. Yet if I run: ... I get the following error:
The error message appears to indicate the proc is not even being run, but even granting that user explicit EXECUTE on that stored proc doesn't seem to fix it. Does anyone know how to grant a user the ability to disable/enable SQL Agent jobs that they don't own, or has anyone successfully used this functionality? Maybe it's just broken in SQL 2012 SP1 CU4 (which is what I am using). |
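For reference, the call in question is essentially the one sketched below, and membership in the msdb role is granted the usual way; the job and account names are placeholders:

    USE msdb;
    EXEC dbo.sp_update_job @job_name = N'Nightly ETL', @enabled = 0;
    -- role membership, if it still needs to be (re)granted:
    EXEC sp_addrolemember N'SQLAgentOperatorRole', N'DOMAIN\SomeUser';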
pg_dump format => custom vs sql Posted: 02 Sep 2013 03:48 PM PDT There are two main formats for pg_dump: custom and plain SQL. The custom format is compressed by default and is not readable if you try to open it. Is it faster to dump into this format as opposed to the SQL format, or is the speed the same? |
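For comparison, the two invocations look roughly like this (database and file names are placeholders); beyond speed, the custom format also lets pg_restore run several jobs in parallel and restore objects selectively, which a plain SQL script cannot:

    pg_dump -Fc mydb -f mydb.dump        # custom format (compressed by default)
    pg_dump mydb > mydb.sql              # plain SQL script
    pg_restore -d newdb -j 4 mydb.dump   # parallel restore from the custom format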
How to make Postgres autovacuum not impact performance? Posted: 02 Sep 2013 03:49 PM PDT I am running into problems where Postgres autovacuum processes are interfering with the overall performance of my database. It is making my select, insert, and update queries slower than usual. What are some settings, and optimization strategies that will make autovacuum have less of an impact on my database performance? Is there a way to make it not consume as much memory/CPU so that client queries can run more efficiently? |
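The knobs that usually matter live in postgresql.conf; the values below are only illustrative, the general idea being that raising the cost delay and lowering the cost limit throttles autovacuum I/O at the price of vacuums taking longer:

    # postgresql.conf - throttle autovacuum (illustrative values only)
    autovacuum_vacuum_cost_delay = 20ms
    autovacuum_vacuum_cost_limit = 200
    autovacuum_max_workers = 3
    maintenance_work_mem = 256MB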
Why doesn't running a full sample *always* improve query performance Posted: 02 Sep 2013 02:42 PM PDT Conventional wisdom would suggest that updating statistics with a full sample gives SQL Server the optimum information for forming a query plan, and therefore the best performance. However, I have a number of queries (produced by Business Objects) which prefer a sample below 100%, and the exact percentage required for success varies. What I want to know (but am struggling to find online) is why a 100% sample doesn't always produce the best performance. Index maintenance is regular, every night, following the Ola Hallengren method. The nature of the queries makes them difficult to post (they contain sensitive information), but those who are familiar with Business Objects queries will know they can be unwieldy and rather unusual in their construction; I often think it is their structure which hurts the performance. Thanks! |
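For concreteness, the two variants being compared are simply different sampling clauses on the statistics update (the table name is a placeholder); the histogram built from each can differ enough to steer the optimizer to a different plan, which is not always the better one:

    UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;
    UPDATE STATISTICS dbo.FactSales WITH SAMPLE 50 PERCENT;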
How to run a SELECT query within while loop in PHP? Posted: 02 Sep 2013 05:18 PM PDT Within a but this does not work. I cannot |
Oracle 11g http listener configuration for PL/SQL server pages Posted: 02 Sep 2013 08:53 AM PDT Could someone please point me to how to configure the Oracle database so it can serve PL/SQL Server Pages? I have successfully created and mapped the DAD with the Here is the listener: When I want to enter
I get the error: No data received. How do I solve this? |
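If the embedded PL/SQL gateway is being used (an assumption, since the listener configuration is not shown), a minimal sketch of the usual checks is below; "No data received" is often simply the XDB HTTP port being 0 (disabled). The DAD name and username are placeholders:

    SELECT DBMS_XDB.GETHTTPPORT FROM dual;     -- 0 means the HTTP endpoint is off
    EXEC DBMS_XDB.SETHTTPPORT(8080);
    -- DAD configuration lives in DBMS_EPG, e.g.:
    EXEC DBMS_EPG.CREATE_DAD('my_dad', '/my_dad/*');
    EXEC DBMS_EPG.SET_DAD_ATTRIBUTE('my_dad', 'database-username', 'SCOTT');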
MySql one time event never runs? Posted: 02 Sep 2013 01:18 PM PDT Please have a look at the events below. I expect event Test1 to run once after 20 seconds, but it never runs. Event Test2 is working fine. Any idea? Thanks. OK, sorry - it is the ALTER that is not working. At first I did: create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO and then shortly after I did: alter EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO I expected event Test1 to run again after another 20 seconds, but it didn't. |
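Two things worth checking in this situation, sketched below: whether the event scheduler is actually on, and what STATUS the one-time event is left with after its first run (the information_schema query shows both):

    SHOW VARIABLES LIKE 'event_scheduler';
    SET GLOBAL event_scheduler = ON;
    SELECT event_name, status, execute_at, on_completion
    FROM   information_schema.events
    WHERE  event_name IN ('Test1', 'Test2');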
Getting 'specified network password is not correct' when trying to change password Posted: 02 Sep 2013 09:00 AM PDT After changing my network password, I went straight to SQL Server Configuration Manager (SQL Server 2008) to update the passwords for the SQL Server and SQL Agent services. (Background: the services run under my domain account.) However, I kept getting the error below after clicking Apply: I guarantee the password entered was definitely the correct one, because I have restarted my workstation and used it to log on successfully. Most answers on the web tend to focus on the password, but I know that's just the symptom, not the root cause. I know it is difficult to answer questions like this unless you have experienced it first-hand, so I am asking whether anyone else has seen this behavior before and what they did to resolve it. |
Slow SSRS Report in production Posted: 02 Sep 2013 12:56 PM PDT I have an SSRS report which gets its data by firing a series of stored procedures. The report is now timing out badly when run in production, yet when I pull down the prod database and restore it to development, the report runs fine. I was thinking of setting up a SQL Server Profiler trace in production in the hope that it will tell me something, e.g. high disk I/O at the time it is run. What else should I be doing? Something with perfmon? |
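Before (or instead of) a full trace, a quick look at what the report's session is actually waiting on while it runs can narrow things down; a sketch using the standard DMVs:

    SELECT r.session_id, r.status, r.wait_type, r.wait_time,
           r.cpu_time, r.logical_reads, t.text
    FROM   sys.dm_exec_requests r
    CROSS  APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE  r.session_id > 50;          -- skip most system sessions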
How to run a cold backup with Linux/tar without shutting down MySQL slave? Posted: 02 Sep 2013 04:18 PM PDT I run the following before tar-ing up the data directory: However, tar will sometimes complain that the The slave machine is a cold standby, so there are no client processes running while tar is running. CentOS release 5.6 64-bit, MySQL 5.1.49-log source distribution. |
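For context, the usual pre-tar sequence on a slave is something like the sketch below (the question's own commands are not shown, so this is an assumption about what "the following" is). Note that FLUSH TABLES WITH READ LOCK only holds the lock for as long as the session that took it, so that session has to stay open while tar runs:

    -- in one mysql session, kept open for the duration of the backup:
    STOP SLAVE;
    FLUSH TABLES WITH READ LOCK;
    -- in another shell: tar -cvf /backup/data.tar -C /var/lib mysql
    -- back in the same mysql session once tar finishes:
    UNLOCK TABLES;
    START SLAVE;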