[how to] Filtering records not present with specific conditions in another table
- Filtering records not present with specific conditions in another table
- How to count number of consecutive events matching status of last event?
- Is backing a website by a SQL Server 2012 OLAP cube considered reasonable?
- Oracle schema migration to new database with zero downtime
- MySQL import LOCK doesn't prevent PRIMARY key issues
- SQL restore duration
- SQL Developer: how to script output to query results grid
- Insert performance with Geography column
- I can't get MySQL56 service to start after I stopped it
- count(PK) yields different result than left join ... is null
- LDAP in SQL Server 2012
- Recommended distributor architecture for transactional replication?
- SQL Server not giving back memory - staying at Max Server Memory setting
- Shrink transaction log while transaction in progress
- How do I remove partitioning?
- Best practice for storing record metadata
- How to view content of Sybase procedures?
- How can I model this problem in a graph database?
- pg_dump 9.2.x command does not work with pg_dump 9.2.3
- Primary replica set server goes secondary after secondary fails
- Speeding up a query on MySQL
- Newly discovered SQL Server Express has all databases inaccessible/offline
- MySQL backup InnoDB
- SQL Server 2008 R2 replication high delivery latency
- Strange characters in mysqlbinlog output
- Connecting to a SQL Server database from a Flash program
- Oracle Express edition on Ubuntu - control file missing
- Are regular VACUUM ANALYZE still recommended under 9.1?
Filtering records not present with specific conditions in another table Posted: 13 Jun 2013 08:18 PM PDT I have a table (let's call it extra_info) with account_id and data_key columns, and so on. I am building reports on other tables but need to filter out records that appear in the extra_info table as 'test'. So in the example above, records 1-10, 13-14 and 16-17 are fine because they are not in extra_info at all; 11 needs to be filtered out (it is 'test'); 12 is fine (but obviously should appear only once, even though it has two labels); 15 is fine (not 'test'); and 18 needs to be removed. I have done this with a subquery in the WHERE clause. This does its job; however, I suspect a regular join would work faster. What's the right syntax for it? If I just join and filter out the rows with data_key='test', #11 would still be there, so that is not a solution. Grouping by account_id will probably have performance implications and basically the same impact as the current code, right? Any ideas?
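A minimal anti-join sketch of what the poster is after, assuming a hypothetical main table accounts(account_id) alongside extra_info(account_id, data_key); the NOT EXISTS form lets the optimizer perform an anti-semi-join and naturally returns each account only once:

    -- Keep accounts that have no 'test' row in extra_info.
    -- Accounts absent from extra_info, or present only with other
    -- labels (like #12 and #15 above), pass the filter exactly once.
    SELECT a.account_id
    FROM accounts AS a
    WHERE NOT EXISTS (
        SELECT 1
        FROM extra_info AS e
        WHERE e.account_id = a.account_id
          AND e.data_key = 'test'
    );

An equivalent LEFT JOIN ... IS NULL form joins only against the 'test' rows and keeps accounts with no match; on most modern optimizers the two produce the same plan, so neither needs a GROUP BY.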
How to count number of consecutive events matching status of last event? Posted: 13 Jun 2013 03:48 PM PDT In MS-SQL Server 2008, I have a table of events. I need to analyze this table to produce a result that shows, for each controller, the status of its most recent event and how many consecutive events share that status.
I've made some progress toward this task: I can identify the controllers whose most recent event was a failure - see this SQL Fiddle. But I could use some help finding an approach to counting the number of consecutive failures that led to this state. Thanks! John
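One gaps-and-islands sketch, assuming a hypothetical Events(ControllerID, EventTime, Status) table; the difference of two ROW_NUMBER() sequences groups consecutive same-status rows into islands, and the size of each controller's latest island is the streak length:

    WITH numbered AS (
        SELECT ControllerID, EventTime, Status,
               ROW_NUMBER() OVER (PARTITION BY ControllerID
                                  ORDER BY EventTime)
             - ROW_NUMBER() OVER (PARTITION BY ControllerID, Status
                                  ORDER BY EventTime) AS island
        FROM Events
    ),
    islands AS (
        SELECT ControllerID, Status, island,
               COUNT(*)       AS consecutive_events,
               MAX(EventTime) AS last_event
        FROM numbered
        GROUP BY ControllerID, Status, island
    )
    -- Keep only the island containing each controller's newest event.
    SELECT i.ControllerID, i.Status, i.consecutive_events
    FROM islands AS i
    JOIN (SELECT ControllerID, MAX(last_event) AS last_event
          FROM islands
          GROUP BY ControllerID) AS latest
      ON latest.ControllerID = i.ControllerID
     AND latest.last_event   = i.last_event;

Filtering the final result on Status = 'Failure' would give only the controllers currently in a failing streak, which matches the poster's SQL Fiddle starting point.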
Is backing a website by a SQL Server 2012 OLAP cube considered reasonable? Posted: 13 Jun 2013 04:09 PM PDT I have been tasked with architecting a solution for a large retail chain. They want to allow each of their 1.2 million customers to log on to a web site to see the distribution of recent purchases (current month, previous month, year-to-date) over about 50 categories. Data will be updated once every day. I am thinking of putting up a SQL Server 2012-based OLAP cube and letting the website query this cube directly, leveraging features like proactive caching. However, being a developer at heart, I have next to no experience with the Analysis Services parts of SQL Server, so I am quite concerned about the performance of this solution. Does connecting a web site directly to an OLAP cube sound like a feasible solution? Do such systems react to the load from multiple users roughly like a SQL Server, making this a reasonable solution, or do they behave completely differently? I don't expect users to check their status very often, and I will of course be using caching on the web server, etc.
Oracle schema migration to new database with zero downtime Posted: 13 Jun 2013 01:17 PM PDT I have a problem: I have two production schemas on one database, each serving a different application. I need to migrate (move) one schema to an entirely new database that will be dedicated to that application, with zero downtime, so export/import cannot be used. I have a license even for Active Data Guard. But if I set up Data Guard from the current database to the new one, should I switch over, fail over, or do something else, so that the current primary continues in its primary role and only the new database changes its role from standby to a fully operational primary? Then we simply point the application at the new database and start it. Thanks for your help.
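A hedged sequence matching what the poster describes - keep the original primary running and promote only the standby - assuming a physical standby has already been built on the new host and that LOG_ARCHIVE_DEST_2 on the primary points at it (both are assumptions):

    -- On the current primary: stop shipping redo to the standby,
    -- so the primary keeps its role and is otherwise untouched.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

    -- On the new standby: stop recovery and activate it as a
    -- standalone read-write database (a terminal step; it will
    -- no longer receive redo from the old primary).
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
    ALTER DATABASE OPEN;

A failover (or this activation) is the variant that leaves the original primary's role unchanged; a switchover would demote it to a standby. After activation, repoint the one application at the new database and drop the schemas each side no longer needs.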
MySQL import LOCK doesn't prevent PRIMARY key issues Posted: 13 Jun 2013 03:44 PM PDT I am using Django and only want to back up MySQL data (not its structure; Django will do that, and it would otherwise mess up Django-South's migration checks). Some info:
What I'm trying to achieve is some mechanism to dump the data and restore it, with the ability to store the original PRIMARY key values in the dump (and thus restore them that way on import). When doing the dump and importing it again, I run into PRIMARY key conflicts: the LOCK statements in the dump do not prevent them.
Other issue: my first thought was to just ignore the PRIMARY keys somehow and let them auto-increment again on insertion. The problem is that I can't do this because of their bindings with foreign keys, which would then break.
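A hedged sketch of one way around the conflicts, assuming the dump contains plain INSERT statements for Django's tables (the table and column names here are hypothetical): disable foreign-key checks for the import session and write the rows with REPLACE, so existing primary keys are overwritten instead of rejected while the original key values that the foreign keys point at are preserved:

    -- Wrap around the import; applies to this session only.
    SET FOREIGN_KEY_CHECKS = 0;
    SET UNIQUE_CHECKS = 0;

    -- REPLACE behaves like INSERT but overwrites rows whose
    -- PRIMARY KEY (or unique key) already exists.
    REPLACE INTO app_customer (id, name, email)
    VALUES (11, 'Alice', 'alice@example.com');

    SET UNIQUE_CHECKS = 1;
    SET FOREIGN_KEY_CHECKS = 1;

mysqldump can emit this form itself: its --replace option combined with --no-create-info produces a data-only dump of REPLACE statements, so the restore side needs no hand editing.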
SQL restore duration Posted: 13 Jun 2013 12:06 PM PDT I am trying to determine the duration of a backup restore. I executed a few restore commands to restore some backups located on a network share; here is the command and a summary of each run. The service account is enabled for instant file initialization. My question is: why did the restore duration vary across restore attempts of the same backup file to the same machine? Is it purely network throughput (perhaps someone was doing something on the network share), or something else, like caching or SQL Server internals?
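A small sketch for timing individual restores, with hypothetical database and share names; WITH STATS makes SQL Server report progress percentages and a final throughput figure (MB/sec) in the Messages output, which helps separate network time from database work:

    RESTORE DATABASE BackupA
    FROM DISK = N'\\fileserver\backups\BackupA.bak'
    WITH REPLACE,        -- overwrite the existing test copy
         STATS = 5;      -- print progress every 5 percent

Comparing the reported MB/sec across runs, and against a restore of the same file from local disk, is a quick way to confirm whether the share is the variable.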
SQL Developer: how to script output to query results grid Posted: 13 Jun 2013 11:55 AM PDT I am used to MS SQL Server Management Studio, where I can DECLARE and SET variables and use them in my query, and then be presented with the results in a data grid that I can copy and paste into Excel. In Oracle SQL Developer, I can get simple query results returned in the 'Query Results' grid, but if I need to script something more complex (with variables), I need to use the 'Run Script' option, and my results are output to the 'Script Output' window, where it's much harder to select the results for copying into Excel. Here is example code, where the variable :v_yr is used three times. In my Googling so far, it seems that in order to display these results in a grid, I will need to set up this query as some kind of stored procedure and assign the results to a cursor. I don't know enough about Oracle programming to understand this answer yet.
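A hedged example of the bind-variable approach, with hypothetical table and column names; if a statement uses the :v_yr placeholder and is run as a single statement (Ctrl+Enter) rather than as a script, SQL Developer prompts once for the value and the rows land in the normal Query Result grid, ready to copy into Excel:

    SELECT o.order_id,
           o.order_date,
           o.amount
    FROM   orders o
    WHERE  EXTRACT(YEAR FROM o.order_date) = :v_yr   -- first use
       AND o.fiscal_year                   = :v_yr   -- second use
       AND o.reporting_year                = :v_yr   -- third use
    ORDER BY o.order_date;

SQL Developer binds the same entered value to every occurrence of :v_yr, so no stored procedure or ref cursor is needed for this case.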
Insert performance with Geography column Posted: 13 Jun 2013 09:34 AM PDT I've been tasked with inserting data into a SQL Server table with a geography column. I've found that my insert times (same data, 1.5 million rows) keep going up. I started out with no geography column and it took 6 minutes; then I added a geography column and it took 20 minutes (again, same data). Then I added a spatial index and it took 1 hour and 45 minutes. I'm new at anything spatial, but this seems like really bad performance. Is there anything I can do to speed this up, or is this just the performance I'm going to see when dealing with SQL Server spatial?
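A common mitigation, sketched with a hypothetical table and index name: disable the spatial index for the bulk load and rebuild it once at the end, instead of maintaining it row by row across 1.5 million inserts:

    -- Before the load: stop maintaining the spatial index.
    ALTER INDEX SIdx_Locations_Geo ON dbo.Locations DISABLE;

    -- ... perform the 1.5M-row insert here ...

    -- After the load: rebuild (and re-enable) the index in one pass.
    ALTER INDEX SIdx_Locations_Geo ON dbo.Locations REBUILD;

One rebuild over the finished table is usually far cheaper than incremental maintenance during the load, though the rebuild itself still takes time proportional to the data.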
I can't get MySQL56 service to start after I stopped it Posted: 13 Jun 2013 10:18 AM PDT I am having an issue on my server. I am running MySQL on Windows Server 2008 R2. I made some changes to the my.ini file, so I had to restart MySQL for the changes to take effect. When I went to Services and tried to start MySQL56, Windows gave me this error: "Windows could not start the MySQL56 service on Local Computer. Error 1067: The process terminated unexpectedly." I tried rebooting my server and that did not work. I reverted the changes that I made to my.ini and that did not work. What can I do to bring MySQL back up? Thanks
count(PK) yields different result than left join ... is null Posted: 13 Jun 2013 08:14 AM PDT I have really strange behaviour here. The situation is that I added rows to my table, and then wanted to make sure everything was okay, so I compared counts. The difference between the two count(product) values I receive from this query is about 2000, which was too few for me. When I instead use a LEFT JOIN ... IS NULL query, I get about 6000, which is okay. The question is: how is it possible that those two queries deliver different results? Thanks in advance.
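A hedged illustration of the two usual culprits, using hypothetical tables a(product) and b(product): COUNT(product) silently skips rows where product IS NULL, and comparing two row counts is not the same as an anti-join when the key has duplicates or NULLs:

    -- COUNT(*) counts rows; COUNT(product) counts non-NULL values.
    SELECT COUNT(*)       AS all_rows,
           COUNT(product) AS non_null_products
    FROM a;

    -- Difference of two table counts: misleading when keys
    -- repeat or are NULL on either side.
    SELECT (SELECT COUNT(product) FROM a)
         - (SELECT COUNT(product) FROM b) AS diff;

    -- True anti-join: rows of a with no matching product in b.
    SELECT COUNT(*) AS missing
    FROM a
    LEFT JOIN b ON b.product = a.product
    WHERE b.product IS NULL;

diff and missing agree only when product is unique and NOT NULL on both sides; NULL keys also never satisfy the join condition, so such rows always land in the anti-join result.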
LDAP in SQL Server 2012 Posted: 13 Jun 2013 04:20 PM PDT In our current system, we query LDAP using SQL Server 2008 R2, but we are moving to SQL Server 2012, where we have hit a snag. According to http://msdn.microsoft.com/en-us/library/cc707782.aspx, one of the discontinued features of SQL Server 2012 is the AD helper service. Does anyone know of a way to query AD from SQL Server 2012?
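One approach that still works in 2012 - the discontinued item is the SQL Server Active Directory Helper service, not the ADSI OLE DB provider - sketched here with a hypothetical domain; treat the LDAP path and attribute names as placeholders:

    -- One-time setup: a linked server over the ADSI provider.
    EXEC sp_addlinkedserver
        @server     = N'ADSI',
        @srvproduct = N'Active Directory Service Interfaces',
        @provider   = N'ADSDSOObject',
        @datasrc    = N'adsdatasource';

    -- Query AD through OPENQUERY (note the doubled quotes
    -- inside the pass-through string).
    SELECT *
    FROM OPENQUERY(ADSI,
        'SELECT cn, mail, sAMAccountName
         FROM ''LDAP://DC=example,DC=com''
         WHERE objectClass = ''user''');

The provider caps results at roughly 1000 rows per query (an AD-side page limit), so large directories need filtered queries.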
Recommended distributor architecture for transactional replication? Posted: 13 Jun 2013 09:59 AM PDT We are planning to create transactional replication on 2 nodes of SQL Server 2008 R2; each node will be both a publisher and a subscriber. What is the best practice for setting this scenario up? Microsoft always recommends that the distributor be independent of the publisher and the subscriber, so it seems obvious that a 3rd node should be involved. But, as each node will be publisher and subscriber at the same time, can the same 3rd node be the distributor for both publishers? Should I create 2 distributors, or can it still be done with only 2 nodes, each one acting as distributor as well as publisher?
SQL Server not giving back memory - staying at Max Server Memory setting Posted: 13 Jun 2013 10:41 AM PDT I'm the only one working on this server, and I followed documentation about optimizing SQL Server memory usage after I saw that SQL Server was using all of the memory available on the server. One of the steps was to fix the max memory size at 20 GB (a test server with 32 GB of memory). After tuning, yesterday, SQL Server was only using 3-4 GB, but today it has again reached the max memory usage I allocated. Why is SQL Server keeping all 20 GB? What will happen when users begin to use the server if the max memory is already used?
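For reference, a sketch of the setting involved (the 20480 value mirrors the 20 GB cap described above); the observed behavior is by design: SQL Server grows its buffer pool up to max server memory and releases it only under external memory pressure, so sitting at the cap is normal and new queries are served from the already-reserved pool:

    -- View and set the cap (value in MB).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 20480;
    RECONFIGURE;

    -- Physical memory as the OS sees it right now.
    SELECT total_physical_memory_kb / 1024     AS total_mb,
           available_physical_memory_kb / 1024 AS available_mb
    FROM sys.dm_os_sys_memory;

sys.dm_os_sys_memory exists in 2008 and later; reaching the cap is not a leak, just caching.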
Shrink transaction log while transaction in progress Posted: 13 Jun 2013 03:35 PM PDT I have a several-GB import into my SQL Server 2005 (Standard Edition) database. The import is separated into 4 parts, each part in its own transaction. As you can guess, the transaction log files get quite big. The space on my disk is also limited, but should fit the data. While doing the import, the free space on my disk is about 2 GB. Does shrinking the files work while my transactions are in progress? Would it have any effect?
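A hedged sketch of what can be checked and attempted; shrinking can run while transactions are open, but DBCC SHRINKFILE can only cut the log back to the start of the oldest active transaction, so an in-flight multi-GB import pins most of the log in place (the database and logical file names below are assumptions):

    -- How full each database's log really is.
    DBCC SQLPERF(LOGSPACE);

    -- Why the log can't be truncated right now.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyImportDb';

    -- Attempt to shrink the log file to ~1 GB.
    DBCC SHRINKFILE (N'MyImportDb_log', 1024);

With ACTIVE_TRANSACTION reported as the log_reuse_wait, the practical fix is smaller transaction batches, not shrinking mid-import.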
How do I remove partitioning? Posted: 13 Jun 2013 04:28 PM PDT There is a table, my_table. On MySQL 5.6 I'm getting an error, but when I run the same code on MySQL 5.5 it runs fine. If my_table has been created with partitions, the code runs on both MySQL 5.5 and MySQL 5.6. I guess MySQL 5.6 needs some configuration for this code to run, but I am not able to tell what. Can you solve it?
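Since the title asks how to drop partitioning, a hedged sketch against the my_table name from the question; REMOVE PARTITIONING keeps all rows and converts the table back into an unpartitioned one:

    -- Inspect whether the table is partitioned.
    SELECT PARTITION_NAME, TABLE_ROWS
    FROM information_schema.PARTITIONS
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME   = 'my_table';

    -- Drop the partitioning scheme but keep the data.
    ALTER TABLE my_table REMOVE PARTITIONING;

This rebuilds the table, so on a large table expect it to take as long as an equivalent ALTER.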
Best practice for storing record metadata Posted: 13 Jun 2013 04:26 PM PDT What is the best practice for storing metadata of individual records in a database? I need to store common metadata, such as creation time and time of last update, for many tables in my database. I found a few different solutions:
Are there more options, pros, or cons than the ones I mentioned here? And what is the best practice for storing this metadata?
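A minimal sketch of the per-table-columns option, with hypothetical names; each table carries its own audit columns and the database fills them automatically:

    CREATE TABLE customer (
        id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name       VARCHAR(200) NOT NULL,
        created_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
                                ON UPDATE CURRENT_TIMESTAMP
    );

This is MySQL syntax; note that before MySQL 5.6.5 only one TIMESTAMP column per table could use CURRENT_TIMESTAMP defaults, which is why older schemas often populate created_at from a trigger or the application instead.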
How to view content of Sybase procedures? Posted: 13 Jun 2013 09:42 AM PDT I would like to view the code of my Sybase stored procedures. I know that show procedures gives me the list of procedures. Can anyone instruct me on how to get the actual code of the stored procedures?
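On Sybase ASE the stored procedure source is kept in syscomments and can be printed with sp_helptext; a short sketch, with a hypothetical procedure name:

    -- Print the source of one procedure.
    sp_helptext my_procedure
    go

    -- Or read it directly from the catalog.
    SELECT c.text
    FROM syscomments c
    JOIN sysobjects o ON o.id = c.id
    WHERE o.name = 'my_procedure'
    ORDER BY c.colid
    go

If the output is unreadable, the procedure may have been hidden with sp_hidetext, in which case the source is no longer recoverable from the server.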
How can I model this problem in a graph database? Posted: 13 Jun 2013 11:26 AM PDT I have a project that I'm working on where I extract data from PDFs and map/visualize the relationships between the extracted pieces. How can I model this in a graph database like Neo4j? I would like to be given one piece of data (like a person's name) and find everything related to it (images, co-mentions, authors, etc.) up to 10 levels deep. Here's what I'm thinking for the structure, but I'm not sure if it's a good approach (this isn't any kind of actual syntax): Is this the right way to structure this data in a graph database?
pg_dump 9.2.x command does not work with pg_dump 9.2.3 Posted: 13 Jun 2013 01:21 PM PDT This is how I have always created backups of my databases with PostgreSQL 9.2.x: However, on a machine running PostgreSQL 9.2.3, I keep getting the error "too many command line arguments", starting with "-Z". If I remove -Z, it complains about "-U". What is wrong? Okay, there is definitely something wrong. I removed argument after argument until I was just running pg_dump. Then I added the arguments back one by one and it worked. So I opened another command line window and tried the same: first "too many arguments"; after calling plain pg_dump and adding the arguments back one at a time, it worked again.
Primary replica set server goes secondary after secondary fails Posted: 13 Jun 2013 03:27 PM PDT I have a 2-server replica set in which, after the secondary fails, the primary steps down to secondary while the failed member is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors when trying to use it. Sometimes, if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary): Is there an easy way to make the primary stay primary after the secondary fails? Am I doing something wrong? Thanks in advance!
Speeding up a query on MySQL Posted: 13 Jun 2013 01:27 PM PDT I have a table with more than 10 million rows and 10 fields (columns). There is an index on field_1 and I am running the following query. The data type of all the columns is varchar(200). This query has not been able to produce a result even after running for more than a day. Any suggestions on getting the results quickly would be helpful.
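Since the query itself was not included, only a hedged diagnostic sketch is possible; EXPLAIN shows whether the field_1 index is actually being used, and a composite or prefix index is a common fix when every column is varchar(200) (the query and index below are placeholders):

    -- Does the optimizer use the index?
    EXPLAIN
    SELECT field_2
    FROM my_table
    WHERE field_1 = 'some value';

    -- A prefix index keeps the key small for long varchar columns.
    ALTER TABLE my_table ADD INDEX idx_f1_f2 (field_1(50), field_2(50));

With 10 million rows, anything reported as type ALL (a full table scan) in the EXPLAIN output is the first thing to eliminate.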
Newly discovered SQL Server Express has all databases inaccessible/offline Posted: 13 Jun 2013 07:27 PM PDT We have several remote sites at my company, and today one of the employees came to me and said their SQL reporting tool stopped working. Up until this point I did not even know this server, database, or reporting service existed! I have RDP access to the server. The webpage they go to for the reports gives them the following error: When I try to view the SQL Server error log I get this error: Here is the ERRORLOG; I can provide older logs too if it would be beneficial.
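A hedged first diagnostic for the "all databases offline" symptom; sys.databases reports each database's state, and a database that is merely OFFLINE can be brought back with ALTER DATABASE (the database name below is hypothetical):

    -- What state is each database in?
    -- (ONLINE, OFFLINE, RECOVERY_PENDING, SUSPECT, ...)
    SELECT name, state_desc, user_access_desc
    FROM sys.databases;

    -- If a database is simply OFFLINE:
    ALTER DATABASE ReportsDb SET ONLINE;

RECOVERY_PENDING or SUSPECT states instead point at missing files or a damaged log, which the ERRORLOG entries should confirm.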
MySQL backup InnoDB Posted: 13 Jun 2013 12:48 PM PDT I have a VoIP server running 24x7. At low-peak hours at least 150+ users are connected. My server has MySQL running with the InnoDB engine on the Windows 2008 platform. I would like to take at least two full database backups a day without shutting down my service. As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good; read here if you are interested. As I'm not a DBA, I would like to know which would be the best solution for taking a database backup in my scenario? Thanks,
SQL Server 2008 R2 replication high delivery latency Posted: 13 Jun 2013 02:27 PM PDT I am seeing an unusually high delivery latency between our distributor and subscribers, and I do not understand why. In this configuration we have 3 SQL Servers using transactional push replication to replicate data from one master server to two reporting servers. We have 9 publications. The distribution agents for most publications show under 5 ms, but one shows 2000+ ms to both subscribers. The suspect publication has only 4 small articles (tables) that rarely, if ever, change. I've checked that each table has a primary key. I've also checked the @status parameter for each article according to the MS KB: "The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements." I'm tempted to start dropping articles to find out if one particular table is the culprit. Does anyone have any suggestions as to what I can look at?
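A hedged sketch for checking and fixing the article status that the KB refers to (the publication and article names are placeholders); in sysarticles, status bit 16 means parameterized statements are used, which is the fast path:

    -- Inspect the articles of the slow publication.
    EXEC sp_helparticle @publication = N'SlowPub';

    -- Switch one article to parameterized statements.
    EXEC sp_changearticle
        @publication = N'SlowPub',
        @article     = N'SmallTable',
        @property    = N'status',
        @value       = N'parameters';

Changing article properties can invalidate the snapshot for that article, so schedule the change accordingly.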
Strange characters in mysqlbinlog output Posted: 13 Jun 2013 10:27 AM PDT Has anyone experienced this? Data replicates fine, but when it is printed by mysqlbinlog there are hidden characters that break the input.
Thanks! Julie
Connecting to a SQL Server database from a Flash program Posted: 13 Jun 2013 11:27 AM PDT I currently have the ability to use Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7. If I link my database with Adobe Flash Builder, are there any additional steps I must take to make the database live, or, as long as my computer is running, will this database be accessible from any device that uses it? In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?
Oracle Express edition on Ubuntu - control file missing Posted: 13 Jun 2013 04:27 PM PDT I have installed the Oracle Express edition on Ubuntu as mentioned here. I am facing issues when I try to create a sample table. I started Oracle, started sqlplus, and executed the CREATE command. After a series of web searches, I tried to shut down and restart Oracle. I realized that the control file is missing from the expected location, so I tried to create the control file, and then tried to create the sample table again. What should be done next? I am clueless, as I am not a database guy.
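A hedged diagnostic sketch in SQL*Plus as SYSDBA; the instance's parameter file records where it expects its control files, which tells you whether the file is genuinely missing or just being looked for in the wrong place:

    -- Where does the instance expect the control files?
    SHOW PARAMETER control_files

    -- Current instance state (STARTED/MOUNTED/OPEN).
    SELECT status FROM v$instance;

If the path shown does not match where XE actually placed the files, pointing control_files at the right location (ALTER SYSTEM SET control_files = '...' SCOPE = SPFILE, then restart) avoids having to recreate them.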
Are regular VACUUM ANALYZE still recommended under 9.1? Posted: 13 Jun 2013 01:01 PM PDT I'm using PostgreSQL 9.1 on Ubuntu. Are scheduled VACUUM ANALYZE runs still recommended, or is autovacuum enough to take care of all needs? If the answer is "it depends", then:
I'm asking because the scheduled VACUUM ANALYZE is impacting my reporting. It runs for more than 5 hours, and I've had to kill it twice this week because it was impacting regular database imports. From the docs, autovacuum should take care of transaction ID wraparound as well. The question stands: do I still need a VACUUM ANALYZE?
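A hedged way to answer this empirically on 9.1: the statistics views show whether autovacuum is actually keeping up, and per-table storage parameters let you tune it for large tables instead of running a manual pass (the table name below is an assumption):

    -- Is autovacuum visiting the busy tables?
    SELECT relname, n_dead_tup,
           last_vacuum, last_autovacuum,
           last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;

    -- Make autovacuum fire earlier on one big table.
    ALTER TABLE big_import_table
        SET (autovacuum_vacuum_scale_factor  = 0.05,
             autovacuum_analyze_scale_factor = 0.02);

If last_autovacuum stays current and n_dead_tup stays low, the 5-hour manual VACUUM ANALYZE is redundant for those tables.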