[how to] sql server sum and count |
- sql server sum and count
- sql server-percentage calculation
- Communication link failure for some queries to linked server
- When is data moved during an Oracle partition split?
- SQL Server 2008R2 DBCC CheckDB never finishing
- Why does that query cause lock wait timeouts?
- PostgreSQL: Get single attribute of UDT in SELECT statement
- Optimal drive configuration for SQL Server 2008R2
- Streaming replication WAL files pg_xlog
- Need help with syntax of Instr in Access 2000. Trying to find values 0.0 to 9.9 in a text string. [migrated]
- When is it appropriate to use SQL Server Developer Edition?
- Modeling staff dimension in data warehouse
- Avoiding repetition without creating a view
- How to restore deleted rows from a binary log?
- Database and query optimization
- Retrieving cached plans fails due to cached objects of database Mirror
- Fastest way to delete matching rows in MySQL?
- Help with simple SQL queries
- Create log file only for my executed queries
- Retrieve Log Detail
- PL/SQL trigger error ORA-00001 and ORA-06512
- Segmentation fault (core dumped) with self-compiled mysql-5.1.9-beta.tar.gz
- MySQL replication between VPS and shared host
- Slave SQL thread hangs
- Partition of tablespace
- Restore a Database to a New database (T-SQL)
- Character vs Integer primary keys
sql server sum and count Posted: 20 Mar 2013 08:52 PM PDT Sample data: This is my query: This is my sample data; I have already computed the count, as you can see from the column header. My problem is that I need to sum the counts and then, for each row, work out the percentage. For example, the total for the records above is 199 and the count for the first record is 43, so the calculation should be 43/199 * 100. How can I display the percentage? In other words: get the count first, then sum the counts, and finally divide each count by the sum and multiply by 100 to get the percentage. Thanks |
sql server-percentage calculation Posted: 20 Mar 2013 06:39 PM PDT Sample data: This is my query: This is my sample data; I have already computed the count, as you can see from the column header. My problem is that I need to sum the counts and then, for each row, work out the percentage. For example, the total for the records above is 199 and the count for the first record is 43, so the calculation should be 43/199 * 100. How can I display the percentage? |
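One common way to do this in SQL Server (a minimal sketch; the original query and table were not included, so the table and column names below are hypothetical) is to let a window aggregate compute the grand total alongside each group's count:

    SELECT Category,
           COUNT(*) AS cnt,
           COUNT(*) * 100.0 / SUM(COUNT(*)) OVER () AS pct   -- 100.0 forces decimal division
    FROM   dbo.SampleData        -- hypothetical table
    GROUP  BY Category;

With the numbers from the question, a group with a count of 43 out of a total of 199 would come back as 43 * 100.0 / 199, roughly 21.6.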
Communication link failure for some queries to linked server Posted: 20 Mar 2013 04:28 PM PDT I am seeing the following error in SSMS (the server has Windows Server 2008 and Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 installed) when running some queries against the linked server, in particular long-running queries. Simple selects from tables across the linked server work fine. This is a new issue that was noticed when SPs that have worked for years started failing. I have run a Wireshark capture on the server, capturing packets to port 1433 on the linked server host. At the tail of the capture, I see many (10) TCP Keep-Alives being issued (after a message regarding a bad checksum) and then an RST packet. The RST packet is correlated with the error below being returned to the client. There are other database servers on our network, where the linked server is configured identically, that don't exhibit this issue. I have found some articles such as this and this. We are using the implicated Broadcom NICs. The Chimney Offload State setting is enabled on the server. We will try disabling it. Other thoughts on troubleshooting would be much appreciated. |
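For reference, on Windows Server 2008/2008 R2 the global TCP Chimney Offload state the poster mentions can be inspected and disabled from an elevated prompt (a standard netsh invocation, not specific to this setup); the Broadcom driver's own offload properties may also need to be changed in the NIC's advanced settings:

    netsh int tcp show global
    netsh int tcp set global chimney=disabled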
When is data moved during an Oracle partition split? Posted: 20 Mar 2013 06:31 PM PDT I need to split an Oracle partition, and I'm confused about whether or not Oracle is going to physically relocate the data when I execute the split. Here's what I intend to do: The problem is that my current MAXVAL_PART has 320 GB of data in it, and I don't want to physically move this data on the disk. If I cut off the partition at 2013-04-01, then there will be no data in the new partition, but I'm getting conflicting information about whether this will still necessitate a move of all the data. Ideally, Oracle sees that my new MAXVAL partition will be empty, defines it in the new tablespace, and I'm all done. Will this command move the data that's already on disk, or will it leave it in place and just create a new partition? This article says Oracle will detect the empty partition and do a fast split: http://docs.oracle.com/cd/B19306_01/server.102/b14231/partiti.htm But I see in a number of places that a split means all data gets moved to new storage. Which is it? |
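As a point of reference, a split of this kind is normally written roughly as below (a sketch only; the table, partition and tablespace names are made up, since the poster's actual DDL was not included). For the fast-split optimization described in the linked documentation, one of the two resulting partitions must end up empty, and the partition that keeps the data generally has to keep the original partition's physical attributes:

    ALTER TABLE my_part_table
      SPLIT PARTITION maxval_part AT (DATE '2013-04-01')
      INTO (PARTITION p_pre_201304,                     -- receives the existing 320 GB
            PARTITION maxval_part TABLESPACE new_ts)    -- new, empty MAXVALUE partition
      UPDATE INDEXES;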
SQL Server 2008R2 DBCC CheckDB never finishing Posted: 20 Mar 2013 02:20 PM PDT I'm running SQL Server 2008 R2 on Windows Server 2008 R2, and up until recently we were having no problems. After recent Windows updates (no SQL Server updates) we cannot run step 1 of our maintenance plan, DBCC CHECKDB. It does not error out; it just runs indefinitely, the longest run so far being 36 hours. The database is only about 500 MB, so I don't think size is an issue. I was just wondering if anyone has seen this sort of issue in the past. |
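While the long-running CHECKDB is active, it can help to check whether it is actually progressing or sitting on a wait (a standard DMV query, nothing specific to this server):

    SELECT r.session_id, r.command, r.percent_complete,
           r.estimated_completion_time / 60000 AS est_minutes_remaining,
           r.wait_type, r.wait_time, r.blocking_session_id
    FROM   sys.dm_exec_requests AS r
    WHERE  r.command LIKE 'DBCC%';

A percent_complete value that never moves, together with a persistent wait_type, usually points at blocking or I/O problems rather than database size.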
Why does that query cause lock wait timeouts? Posted: 20 Mar 2013 06:24 PM PDT From time to time, I find a lot of these errors in my PHP error log: The problem persists for about 2 or 3 minutes. Thanks to Stack Overflow, the reason was quite easy to find: What I do not understand is: why? The locked table is very small, only 61 entries (about 30 new and 30 deleted per day; the auto-incremented primary index is near 800,000). No column is especially large. I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries) and the problem occurs from time to time. RAM should not be an issue. The web server and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) in large tables (1-2 million entries) did not cause any trouble. Thanks for your hints! |
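When the timeouts are happening, the transactions and lock waits involved can usually be seen directly (a generic set of diagnostics, assuming MySQL 5.5+ or 5.1 with the InnoDB plugin so the INFORMATION_SCHEMA InnoDB tables exist):

    SELECT * FROM information_schema.INNODB_TRX;         -- transactions currently open
    SELECT * FROM information_schema.INNODB_LOCK_WAITS;  -- who is waiting on whom
    SHOW ENGINE INNODB STATUS;                           -- latest deadlock, long-running transactions

This often reveals a long-running transaction holding the lock on the small table, which is a more common cause than table size.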
PostgreSQL: Get single attribute of UDT in SELECT statement Posted: 20 Mar 2013 03:16 PM PDT I created a user-defined type in a PostgreSQL 9.2 database and am trying to read a single attribute of it in a query. Creation of type and table: Both times I get an error. Is it possible to access a single attribute defined in a UDT in a SELECT statement? |
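The syntax PostgreSQL expects is to parenthesize the composite column before the dot; otherwise the parser looks for a table named like the column. A minimal sketch with made-up names (the poster's actual type and table were not included):

    CREATE TYPE address_t AS (street text, city text);
    CREATE TABLE customer (id int, addr address_t);

    SELECT id, (addr).city FROM customer;   -- parentheses around the column are required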
Optimal drive configuration for SQL Server 2008R2 Posted: 20 Mar 2013 04:22 PM PDT I have a fairly busy database server running SQL Server 2008 R2 that has the following setup:
Assuming I can't add additional drives into this server, have I made the best use of the configuration I have available? Or should I consider another scheme here where logs are isolated from the data files, for example? |
Streaming replication WAL files pg_xlog Posted: 20 Mar 2013 05:40 PM PDT We are using repmgr, the awesome tool developed by 2nd Quadrant, for streaming replication. We set 1000 WAL segments to be archived. What I noticed, though, is that on the master we have about 600 WAL files in pg_xlog. I might be missing something, but I'd expect the same number of WAL files on both the master and the standby. Is there some magical process which is removing them? |
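For context, the number of segments sitting in pg_xlog on each node is governed by a handful of settings rather than kept in sync between master and standby, so differing counts are normal. The relevant knobs on a 9.2-era setup look roughly like this (values illustrative):

    # postgresql.conf
    wal_keep_segments   = 1000   # minimum number of segments retained for streaming standbys
    checkpoint_segments = 32     # influences how many segments checkpoints recycle or remove
    archive_mode        = on
    archive_command     = '...'  # a segment is only removed once it has been archived successfully

Old segments are removed or recycled at checkpoint time once they are no longer needed for wal_keep_segments, archiving, or a connected standby, which is the "magical process" in question.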
Need help with syntax of Instr in Access 2000. Trying to find values 0.0 to 9.9 in a text string. [migrated] Posted: 20 Mar 2013 01:08 PM PDT I am an Access novice trying to use InStr to select numeric data from a text string. My problem is with string2: it returns 0 for every occurrence. If I specify a value such as 1.4 or 3.2, or even a text value such as QJX (not even a number), the specified value is found, but I need it to match two digits with a decimal point between them. Please can you help, as I am really stuck. Thanks |
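InStr only searches for a literal substring, so it cannot express "any digit, a dot, then any digit". In Access the usual tool for that kind of pattern is the Like operator, where # stands for a single digit. A sketch with hypothetical table and field names:

    SELECT *
    FROM   MyTable
    WHERE  MyField Like "*#.#*";

The same pattern works in VBA, e.g. If someString Like "*#.#*" Then ..., if the test has to happen in code rather than in a query.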
When is it appropriate to use SQL Server Developer Edition? Posted: 20 Mar 2013 04:27 PM PDT Do you use SQL Server Developer Edition on server-class machines in DEV and STAGING environments? I am working on a large project where (if it passes the proof-of-concept stage) we will have several large, geographically distributed, enterprise-class database servers running SQL Server 2012 Enterprise Edition. Our production environment will initially have 3 servers, our staging environment will have a minimum of 3 servers, and our development environment will have 1 server (hosting three instances). I was under the impression that we would only need to acquire Enterprise licences for the actual production servers, and that we could get by with Developer Edition in our development and staging environments because they are not "production". Other sources have told me that we need to have an Enterprise licence on all of the above-mentioned machines, and that Developer Edition is only meant for a single developer on their workstation. Since Developer Edition has all the sexy features of Enterprise, I can't really see the value of it on a workstation-class machine, especially for developing and testing the type of high-availability system we are building. If we have to fork out Enterprise licences for a dev server, that will just about kill our proof-of-concept stage, thus killing the project. Forcing an Enterprise licence on a staging environment will make management just want to skip staging altogether. |
Modeling staff dimension in data warehouse Posted: 20 Mar 2013 12:44 PM PDT I need to design a data warehouse diagram, and I've got a problem with the staff area. I have to store information about worker details such as name, age, etc., as well as job time and agreement details such as job_start, job_end, salary, etc. At first I tried to draw a separate dimension for each kind of data, but I am now wondering whether these dimensions should instead be connected to each other. |
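One possible shape, offered only as a sketch of the trade-off (none of this comes from the poster's model): keep a single staff dimension and treat the agreement details as slowly changing attributes (SCD type 2), so worker details and contract history live in one table rather than in dimensions that reference each other:

    CREATE TABLE dim_staff (
        staff_key   INT PRIMARY KEY,     -- surrogate key used by the fact tables
        staff_id    INT,                 -- natural/business key of the worker
        name        VARCHAR(100),
        birth_date  DATE,
        job_start   DATE,
        job_end     DATE,
        salary      DECIMAL(10,2),
        valid_from  DATE,                -- SCD type 2 validity window
        valid_to    DATE,
        is_current  CHAR(1)
    );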
Avoiding repetition without creating a view Posted: 20 Mar 2013 03:27 PM PDT Suppose that I have a query Q1 and I need to run a query like the following: I would like to do that:
How can I do that in PostgreSQL? |
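Assuming the goal is to reuse Q1 inside a single statement without defining a view, a common table expression usually covers it. A minimal sketch (the query standing in for Q1 is invented, since the original was not shown):

    WITH q1 AS (
        SELECT id, amount FROM orders WHERE status = 'open'   -- stands in for Q1
    )
    SELECT a.id, a.amount AS amount_a, b.amount AS amount_b
    FROM   q1 AS a
    JOIN   q1 AS b ON b.id = a.id + 1;

The CTE is written once and referenced as many times as needed within that statement.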
How to restore deleted rows from a binary log? Posted: 20 Mar 2013 01:06 PM PDT Background: I am trying to recover a few thousand rows that were accidentally deleted from a MySQL database. Luckily replication is enabled, so in the binary logs I have a record of each INSERT, UPDATE and DELETE that was performed on the database since it was created. Unfortunately, in a previous attempt to recover the lost rows, someone copied all of the INSERTs from the binlogs into the database. This created an even bigger mess by adding a bunch of duplicates and changing the ID value, which is set to auto_increment. Remediation: I created a new recovery database based on a backup that was made a few weeks prior to the incident. I then found the exact point in the binary logs where the backup ended. I extracted all the INSERTs/UPDATEs/DELETEs from the binlog file and fed them into the new recovery database up until the point where the rows got deleted; this restored the database back to where it was before all the rows had been deleted. Problem: Because of the first failed attempt to recover the lost rows, a lot of invalid data was added to the original database along with new legitimate data, and because one of the columns in the row is set to auto_increment, this creates a mismatch for the value of the auto-incrementing column when trying to restore the database. I can't simply copy/paste all the missing legitimate binlog SQL statements, because the value of the auto_increment column has changed due to the invalid data that was inserted, which I don't want. Example: An example of the binlog file: OLD database NEW database Now if I were to copy the binlog SQL statements into the recovery database, ignoring the garbage INSERTs which I don't want, the UPDATE statement would fail because the value of `id` would be different. The same goes for DELETEs and INSERTs which rely on the value of `id`. Question: Is there an "easy" way to recover all the new legitimate data that was introduced to the database after the incident? The only solution I can think of at the moment is manually going through each INSERT/UPDATE/DELETE and adjusting the value of `id`. I am hoping there is a better solution, because with thousands of records this approach would take a long time. Any help is greatly appreciated! |
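For whoever ends up doing this, the usual way to carve statements out of the binlog by position looks roughly like the sketch below (file name and positions are placeholders, not values from the question); it does not solve the id-renumbering problem by itself, but it keeps the extraction step reproducible and editable before replay:

    # extract everything between two known positions into an editable SQL file
    mysqlbinlog --start-position=107 --stop-position=739194 mysql-bin.000123 > post_incident.sql

    # review post_incident.sql, remove the garbage INSERTs, adjust id values, then replay
    mysql recovery_db < post_incident.sql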
Database and query optimization Posted: 20 Mar 2013 07:32 PM PDT I have a database containing three tables: tbl_database (the main/primary table), tbl_cmdatabase and tbl_blacklist. The three tables share the same structure but differ in the number of records they can hold. This is the SQL structure, which is basically the same for all three: Table tbl_database contains ~194,074 records and should continue to grow in large numbers, tbl_cmdatabase contains ~45,742 records and can also grow, though probably not as dramatically as tbl_database, and finally tbl_blacklist has ~92,038 records. I am developing an application where I need to verify that the number I am about to insert into tbl_database is not present in tbl_blacklist or tbl_cmdatabase, so for every row I have to search those tables for the number (these queries are slow with this number of records). What I need to know is whether I should optimize some tables or change the structure, because when I run these queries the server tends to go down, as the shared hosting does not support heavy queries. Can anyone help me with this issue? Any suggestions? Edit: Added a file for test data |
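A sketch of the usual pattern for this kind of membership check (the column name `number` is assumed, since the actual structure was not shown): index the lookup column in both side tables and test existence before the insert, which turns each check into two index seeks instead of scans:

    CREATE INDEX idx_blacklist_number  ON tbl_blacklist  (number);
    CREATE INDEX idx_cmdatabase_number ON tbl_cmdatabase (number);

    SELECT EXISTS (SELECT 1 FROM tbl_blacklist  WHERE number = '5551234567')
        OR EXISTS (SELECT 1 FROM tbl_cmdatabase WHERE number = '5551234567') AS already_known;

If already_known comes back as 0, the application can go ahead with the INSERT into tbl_database.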
Retrieving cached plans fails due to cached objects of database Mirror Posted: 20 Mar 2013 07:58 PM PDT I'm trying to retrieve cached execution plans for a specific database object, but this fails because of cached plans which belong to a mirrored database in the MIRROR state. The query below, without any additional WHERE clauses, fails with the following SQL error: Msg 954, Level 14, State 1, Line 1 The database "DatabaseName" cannot be opened. It is acting as a mirror database. I suspect what happens is that the sys.dm_exec_query_plan function first tries to parse all the plan handles from the cache, but fails on cached objects for a mirrored database. Does anyone know if there are any ways to get around this, T-SQL-wise? Of course I could execute DBCC FREEPROCCACHE to clear the cache, but I'm hoping for other solutions. I'm a bit amazed this function does not discard objects from mirrored databases while trying to parse the plans. |
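One possible workaround to experiment with (a sketch only; the database name is a placeholder) is to resolve each plan's dbid via sys.dm_exec_plan_attributes first, materialize the surviving handles, and only then ask sys.dm_exec_query_plan for the XML, so handles belonging to the mirror never reach the failing function:

    SELECT cp.plan_handle
    INTO   #handles
    FROM   sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE  pa.attribute = 'dbid'
      AND  CAST(pa.value AS int) = DB_ID('MyPrincipalDatabase');

    SELECT h.plan_handle, qp.query_plan
    FROM   #handles AS h
    CROSS APPLY sys.dm_exec_query_plan(h.plan_handle) AS qp;

Materializing into a temp table keeps the optimizer from interleaving the dbid filter with the call to sys.dm_exec_query_plan.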
Fastest way to delete matching rows in MySQL? Posted: 20 Mar 2013 12:16 PM PDT I have more than 10 million records in a table. I want to delete the records where an email column contains a particular string. What is the fastest method to perform this task? |
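A common approach, sketched with a made-up pattern since the actual string was not included in the question: delete in bounded batches so each statement keeps its locks and undo small, and note that a leading wildcard, as below, cannot use an index on email:

    DELETE FROM my_table
    WHERE  email LIKE '%@example.com'
    LIMIT  10000;            -- repeat until 0 rows are affected

Running this in a loop from the application or a stored procedure avoids one giant multi-minute transaction on a 10-million-row table.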
Help with simple SQL queries Posted: 20 Mar 2013 12:03 PM PDT Could I get some help, please, with the following queries? Consider the following schema: Suppliers(sid, sname, saddress) Parts(pid, pname, color) Catalog(sid->Suppliers, pid->Parts, cost); sid and pid are foreign keys in this table. The primary keys are emphasized, and ->Rel indicates a foreign key relationship with the primary key of Rel.
Can somebody help me with those queries, please? |
Create log file only for my executed queries Posted: 20 Mar 2013 03:24 PM PDT I have been using the following to set up the log file: After that, for testing, I just executed a query and then looked at the log file. How can I filter out the queries which were executed along with my query (anonymous queries)? Is there any possibility of logging the affected table information, like:
And how can I get these details using queries? |
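For reference, the knobs that control MySQL's general query log (5.1 and later) look roughly like this; the file path is illustrative, and writing the log to a table makes it straightforward to filter out other sessions' queries with plain SQL:

    SET GLOBAL log_output       = 'TABLE';     -- or 'FILE'
    SET GLOBAL general_log      = 'ON';
    SET GLOBAL general_log_file = '/var/log/mysql/general.log';

    SELECT event_time, command_type, argument
    FROM   mysql.general_log
    WHERE  user_host LIKE 'myuser%';           -- only my own connections

The general log records the statement text but not which tables or rows were affected, so the per-table detail asked about would have to come from parsing the statements or from the binary log.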
Retrieve Log Detail Posted: 20 Mar 2013 03:10 PM PDT I have been using the following to set up the log file: After that, for testing, I just executed a query and then looked at the log file. How can I filter out the queries which were executed along with my query (anonymous queries)? Is there any possibility of logging the affected table information, like:
And how can I get these details using queries? |
PL/SQL trigger error ORA-00001 and ORA-06512 Posted: 20 Mar 2013 04:27 PM PDT I created this trigger a week ago with no compilation errors, but when I insert a record into the table, the error is raised. I can't figure out where I went wrong. How can I fix this? Please note that I can't remove the constraint, and it is a primary key. |
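ORA-00001 means a unique or primary key constraint was violated, and with an insert trigger that usually means the trigger is producing a key value that already exists. The trigger itself was not included in the question, so the sketch below only illustrates the common fix of taking the key from a sequence (all names are invented):

    CREATE SEQUENCE my_tab_seq START WITH 1000 INCREMENT BY 1;

    CREATE OR REPLACE TRIGGER my_tab_bi
    BEFORE INSERT ON my_tab
    FOR EACH ROW
    BEGIN
      IF :NEW.id IS NULL THEN
        SELECT my_tab_seq.NEXTVAL INTO :NEW.id FROM dual;
      END IF;
    END;
    /

Starting the sequence above the current maximum id keeps the generated values from colliding with rows that already exist.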
Segmentation fault (core dumped) with self-compiled mysql-5.1.9-beta.tar.gz Posted: 20 Mar 2013 06:35 PM PDT I am compiling the mysql-5.1.9-beta.tar.gz source code under Ubuntu 12.04. My steps are as follows: The bug is: (here I wait for a very long time, so I must press Ctrl+C to stop it and continue) After I input |
MySQL replication between VPS and shared host Posted: 20 Mar 2013 12:18 PM PDT I have a VPS where I have SSH access and a shared host with cPanel. Both have MySQL. I need to create a master/slave setup between the two servers. As per this tutorial: How To Set Up Database Replication In MySQL, I will need to modify the MySQL configuration file (my.cnf). Is it possible to achieve this with the following permissions on both the servers?
|
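For context, the master-side changes that tutorial depends on look roughly like the sketch below (values are examples, not taken from either server). Editing my.cnf and restarting MySQL is normally only possible on the VPS; on a cPanel shared host without root access, the hosting provider would have to make those changes, which is why this setup is often not achievable there.

    # my.cnf on the master (illustrative values)
    [mysqld]
    server-id    = 1
    log_bin      = /var/log/mysql/mysql-bin.log
    binlog_do_db = mydb

    # then, in a MySQL session on the master:
    #   GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'password';
    #   FLUSH PRIVILEGES;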
Slave SQL thread hangs Posted: 20 Mar 2013 12:30 PM PDT We have a master-slave setup with row-based replication. We are seeing huge delays on the slave even though there is no activity running on either the master or the slave. When we looked in, we observed that the SQL thread appears to be hung. It has been in the "Reading event from the relay log" state for the last 3 hours or more. Could someone please have a look into this ASAP? |
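The usual first checks on the slave when the SQL thread appears stuck are generic and worth capturing before anything is restarted (nothing here is specific to this setup):

    SHOW SLAVE STATUS\G           -- Slave_SQL_Running, Seconds_Behind_Master, Last_SQL_Error, positions
    SHOW PROCESSLIST;             -- state of the SQL thread and any sessions blocking it
    SHOW ENGINE INNODB STATUS\G   -- long-running transactions or lock waits on the slave

If Exec_Master_Log_Pos is not advancing while Read_Master_Log_Pos is, the SQL thread really is stuck rather than just slow.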
Partition of tablespace Posted: 20 Mar 2013 07:45 PM PDT My PostgreSQL server is running on a Windows server and three tablespaces are created. The PostgreSQL server is installed on the C: drive and the tablespaces are on the D: and E: drives. How do I take a backup and restore it? Is it similar to a normal backup, and when restoring on another machine, do I need to have the same setup or can I restore it onto a single drive? From the tablespace folder, is it possible to retrieve the data of only one table, which is huge? |
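For a logical backup the tablespace layout does not have to match on the target machine; a rough sketch with placeholder names (the --no-tablespaces flag tells pg_restore to put everything in the target's default tablespace):

    pg_dump -Fc -f mydb.dump mydb                             # whole database, custom format
    pg_dump -Fc -t big_table -f big_table.dump mydb           # just the one huge table
    pg_restore --no-tablespaces -d mydb_restored mydb.dump    # target database must already exist

A file-level copy of a single tablespace folder, by contrast, is not restorable on its own, so pulling one table's data out of that folder alone is not supported.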
Restore a Database to a New database (T-SQL) Posted: 20 Mar 2013 08:45 PM PDT I have a database backup from an existing database and want to restore it to a new database, following http://msdn.microsoft.com/en-us/library/ms190447(v=sql.105).aspx Based on the posts I found on the internet, I first got the logical names from the backup file using the script below: Then I used the logical names in the following script: Below is the error I'm getting:
I checked the disk space and can confirm it is fine. I assume the target database should exist before running the restore. Is that right? Any suggestions on how to fix this issue? Thanks. |
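For reference, the pattern the linked article describes usually ends up looking like the sketch below (logical names, file paths and the backup location are placeholders, not the poster's values). The target database does not need to exist beforehand; RESTORE DATABASE creates it, with WITH MOVE placing the data and log files at new paths:

    RESTORE FILELISTONLY FROM DISK = N'C:\Backups\Source.bak';   -- lists the logical names

    RESTORE DATABASE NewDb
    FROM DISK = N'C:\Backups\Source.bak'
    WITH MOVE N'Source_Data' TO N'C:\SQLData\NewDb.mdf',
         MOVE N'Source_Log'  TO N'C:\SQLData\NewDb_log.ldf',
         RECOVERY;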
Character vs Integer primary keys Posted: 20 Mar 2013 02:48 PM PDT I'm designing a database with multiple lookup tables containing possible attributes of the main entities. I'm thinking of using a 4- or 5-character key to identify these lookup values rather than an auto-incrementing integer, so that when I store these attribute IDs in the main tables I'll see meaningful values rather than just random numbers. What are the performance implications of using a character field as a primary key rather than an integer? I'm using MySQL, if that matters. [Edit] |
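To make the comparison concrete, the two designs under discussion look roughly like this (an illustrative lookup table, not the poster's schema); the character key is readable in the referencing tables, while the integer key is smaller in every secondary index and foreign key that repeats it:

    CREATE TABLE color_int (
        color_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        color_name VARCHAR(50)
    ) ENGINE=InnoDB;

    CREATE TABLE color_chr (
        color_code CHAR(5) PRIMARY KEY,       -- e.g. 'RED', 'GREEN'
        color_name VARCHAR(50)
    ) ENGINE=InnoDB;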