[how to] PostgreSQL, How to keep only one schema?
- PostgreSQL, How to keep only one schema?
- SQL Server Transaction Backups vs Logs
- Moving one TempdB on 3 instance server
- Oracle SQL, how to filter on a set by a rank?
- MySQL shared server regular lockup
- What's the proper term for the diverse systems to which a DW connects?
- problem running xp_cmdshell when Application Name set in connection string
- How should I set up these tables in my relational database?
- How does SQL Server generate a query execution plan that adds up to 6,000%?
- exec xp_cmdshell 'sc \\<servername/serverip> start <service_name>'
- Best practice for copying tables from server to server
- SQL Server 2012 AlwaysOn: need to automatically add databases by script- T-sql or PowerShell
- How can I join one to one to (many to one) without using group_concat
- How to optimize a query that's running slow on Nested Loops (Inner Join)
- Buffer usage for index with no reads
- How can I verify that a full database restore reflects the exact source database in SQL Server?
- How to recover data from corrupted innoDB table from a crashed mySQL server?
- How to permanently delete a row in oracle forever
- Executing Child packages without using master package
- Can SQL Server 2008 / 2012 have execute permission in a function for a user, but not for a table used in the function?
- Migrating from SQL Server 2005 to SQL Server 2008 R2 on a different box
- MSDTC on SQL Server 2012 with mount point for data disk
- Backing up MySQL database
- SQL 2008: Why is *any* user, with minimal permissions able to restart/stop the SQL instance through Management Studio
- Optimizing queries on a range of timestamps (two columns)
- Payment methods conceptual and logical model
- How to check the status of SQL Server?
- SQLITE: A problem of tags and products
PostgreSQL, How to keep only one schema? Posted: 17 Apr 2013 09:05 PM PDT I don't want some users to be able to create schemas; they should only use the public schema. I'd like to remove the permission to create schemas, but I can't find anything in the PostgreSQL 9.1 documentation. How can I do this?
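Schema creation is governed by the CREATE privilege on the database itself, so revoking that privilege should do it. A minimal sketch, assuming a database named mydb and a role named appuser (both placeholder names):

```sql
-- CREATE on a database controls who may run CREATE SCHEMA in it.
REVOKE CREATE ON DATABASE mydb FROM PUBLIC;
REVOKE CREATE ON DATABASE mydb FROM appuser;

-- The public schema remains usable; grant object creation there
-- explicitly if the role needs it.
GRANT USAGE, CREATE ON SCHEMA public TO appuser;
```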
SQL Server Transaction Backups vs Logs Posted: 17 Apr 2013 09:18 PM PDT I've inherited a moderately-sized, business-critical SQL Server 2008 database and am trying to wrap my head around backup planning. (I'm a developer, not a DBA.) The way our system is set up right now, there are two backup systems:
In the event of an emergency restore (or when restoring backups to a dev machine), my procedure is to use the My questions:
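For reference, the two backup types the title contrasts map to two T-SQL commands; a minimal sketch with hypothetical database and path names:

```sql
-- Full backup: a complete copy of the database at a point in time.
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb_full.bak'
WITH INIT;

-- Transaction log backup: the log records since the last log backup.
-- Only available in the FULL (or BULK_LOGGED) recovery model; the chain
-- of log backups is what allows point-in-time restores.
BACKUP LOG MyAppDb
TO DISK = N'D:\Backups\MyAppDb_log.trn';
```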
Moving one TempdB on 3 instance server Posted: 17 Apr 2013 09:07 PM PDT I have a large virtualized SQL Server (Full 2008 R2). I run 3 SQL instances and would like to relocate the TempdB database file to another location, splitting it off from the TempdB log file. The trouble I am having is that even though the ALTER DATABASE query executes successfully, restarting that particular instance's SQL Server service does not relocate the database file. Do I have to restart the whole server to move this database?
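For reference, the usual pattern is below; tempdb's files are recreated at the new location when that instance's service restarts, so a full server reboot shouldn't be needed if the command targeted the right logical file names. A sketch with hypothetical paths and the default logical names:

```sql
-- Verify the logical names first:
--   SELECT name, physical_name FROM sys.master_files WHERE database_id = 2;
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDBData\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = N'L:\TempDBLog\templog.ldf');
-- The change takes effect after restarting this instance's service only.
```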
Oracle SQL, how to filter on a set by a rank? Posted: 17 Apr 2013 04:24 PM PDT I am struggling with the following SQL query right now: http://sqlfiddle.com/#!4/736a7/12 Basically I am trying to get UserCredential that are Many To One with Credential. Credential is One To Many with HealthSystemCredential. The HealthSystemCredential has a foreign key to AccessLevel, however this AccessLevel column in HealthSystemCredential is actually the Lowest Allowable Access Level. AccessLevel records have a Type Rank between 1 at the lowest and 4 at the highest.
The problem is I can't figure out how to do this in one query. Is there some way I can get a DENSE_RANK column in a subquery of AccessLevels, apply my filter in the subquery, and join it to the table somehow to give me all the applicable HealthSystemCredentials I want?
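One way to express this is to rank the access levels once in a subquery and join through it. A sketch of the shape, with hypothetical table and column names inferred from the description (the real schema is in the linked SQL Fiddle):

```sql
-- Rank access levels by type rank, then keep only HealthSystemCredentials
-- whose lowest allowable level ranks at or below the user's level (here: 3).
SELECT uc.*, hsc.*
FROM user_credential uc
JOIN credential c
  ON c.id = uc.credential_id
JOIN health_system_credential hsc
  ON hsc.credential_id = c.id
JOIN (
    SELECT al.id,
           DENSE_RANK() OVER (ORDER BY al.type_rank) AS lvl_rank
    FROM access_level al
) ranked
  ON ranked.id = hsc.lowest_access_level_id
WHERE ranked.lvl_rank <= 3;
```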
MySQL shared server regular lockup Posted: 17 Apr 2013 04:20 PM PDT I have a web/database server with 5 websites on it, all of which execute fairly complex queries regularly. Some of these queries appear in the slow log and take around 0.6-0.8 seconds. However, I have a regular occurrence of such queries coming from 2 specific websites with 2 separate databases that seem to lock each other up. What I find in this case in top is a MySQL process occupying about 20-30% of a CPU and about 30 Apache processes taking up about 30% CPU each. This situation goes away after about 15-20 minutes or if I restart the MySQL server. The slow query log shows more slow queries at this time. The problem never seems to involve queries to only a single database, but always the same 2 databases (and websites). The server is a VPS with 4 GB of RAM and 5 CPU cores on Ubuntu 12.04. Any idea how I could diagnose what is happening?
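A useful first step is to capture what MySQL is actually doing while the lockup is in progress; a minimal diagnostic sketch:

```sql
-- What is every connection doing right now? Long-lived 'Locked' or
-- 'Sending data' states point at the queries holding things up.
SHOW FULL PROCESSLIST;

-- For InnoDB tables: current lock waits, latest deadlock, transaction states.
SHOW ENGINE INNODB STATUS;

-- Counters worth sampling during and outside an incident:
SHOW GLOBAL STATUS LIKE 'Threads_running';
SHOW GLOBAL STATUS LIKE 'Table_locks_waited';
```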
What's the proper term for the diverse systems to which a DW connects? Posted: 17 Apr 2013 03:39 PM PDT The data warehouse I've built gets data from transactional databases, web services, flat files, and a few other bits and pieces. It also writes back to some of these, and to analysis tools. I'm building a unified system for passing changes between these different systems, and it occurs to me that I don't have a single simple term that covers them all.
What term do you use for all the systems you integrate, as part of a data warehouse or just in general? Ideally I'd like a single word which will make intuitive sense to both DBAs and non-technical parties.
problem running xp_cmdshell when Application Name set in connection string Posted: 17 Apr 2013 02:58 PM PDT I have an app on 10 or so servers that runs a few xp_cmdshell statements against MSSQL 2008. It works fine on all the servers except for one. To make things worse, I can run all the commands in SQL Management Studio, but in the app they don't work. I even made a temporary app to test and it works fine! But in the deployed app I get a simple SQL error, "Access Denied". I have narrowed it down to the connection string: if I include the application name, it throws access denied, but only when calling xp_cmdshell; normal SQL statements work fine. If I remove the application name, it works fine for both normal SQL statements and calls to xp_cmdshell. The strange thing is, it's only happening on one of the ten servers. The only difference is that the server has SP1 and the others do not. @@VERSION returns I am thinking that there is some sort of authentication that can be granted to applications, but I can't seem to find anything. I can replicate it in SQL Management Studio by adding to the Additional Connection Parameters tab on the Connect to Database Engine dialog when you create a new query or change its connection. The simple test statement I use is If anyone could shed some light on what is happening, it would be much appreciated.
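The asker's actual connection string and test statement didn't survive the digest; for anyone trying to reproduce this, a hypothetical equivalent would look like:

```sql
-- Connection string fragment that triggers the failure (hypothetical values):
--   Server=myserver;Database=mydb;Integrated Security=SSPI;Application Name=MyApp;
-- A minimal xp_cmdshell test statement:
EXEC xp_cmdshell 'whoami';
```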
How should I set up these tables in my relational database? Posted: 17 Apr 2013 06:31 PM PDT I'm working on a project where a person can create a survey with custom questions and display the questions in whatever order they choose. They can break the questions into sections. The problem I'm having is that there are 2 or more groups of people, and each group can have the same question in different sections and/or in a different order. My tables right now are: Is this okay, or is there a better/more efficient way of doing this? Any help would be appreciated.
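The asker's table list was lost in the digest, but one common shape for "same question, different section and order per group" is to keep questions in one table and move placement into a per-group junction table; a hypothetical sketch:

```sql
-- Questions exist once per survey; where and in what order each group
-- sees them lives in the placement table.
CREATE TABLE question (
    question_id   INT PRIMARY KEY,
    survey_id     INT NOT NULL,
    question_text VARCHAR(500) NOT NULL
);

CREATE TABLE group_question_placement (
    group_id    INT NOT NULL,
    question_id INT NOT NULL,
    section_id  INT NOT NULL,  -- the section this group sees the question in
    position    INT NOT NULL,  -- display order within that section
    PRIMARY KEY (group_id, question_id)
);
```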
How does SQL Server generate a query execution plan that adds up to 6,000%? Posted: 17 Apr 2013 01:55 PM PDT Today I was on The Heap and was looking at a query plan for something I thought could be improved. However, it created something that shook my belief in the SQL Server query optimizer. Can I still trust SQL Server if it cannot even count to 100%? Characteristics of the table:
Has anyone seen this before, and what causes the plan to look so skewed? Below is the view from SQL Sentry Plan Explorer.
exec xp_cmdshell 'sc \\<servername/serverip> start <service_name>' Posted: 17 Apr 2013 03:37 PM PDT When I execute this on SQL Server against another machine (in the same subnet), I get "Access is denied". But when I run it from the command prompt (sc \\<servername/serverip> start <service_name>), it works. Please advise why I get access denied when doing it from SQL Server using exec xp_cmdshell.
Best practice for copying tables from server to server Posted: 17 Apr 2013 07:12 PM PDT Not being a DBA and not having sysadmin privileges to do a backup/restore, would the following be a good solution for copying a set of tables? (I have a linked server from copyTables copyTable
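The asker's script didn't survive the digest, but the general pattern over a linked server looks something like the following (hypothetical server, database, and table names). Note that SELECT INTO copies only data, not indexes, constraints, or triggers:

```sql
-- Pull a table across a linked server into a new local table.
SELECT *
INTO dbo.Customers_copy
FROM [SourceServer].[SourceDb].dbo.Customers;

-- Or append into an existing table with a matching column layout:
INSERT INTO dbo.Customers_copy
SELECT * FROM [SourceServer].[SourceDb].dbo.Customers;
```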
SQL Server 2012 AlwaysOn: need to automatically add databases by script- T-sql or PowerShell Posted: 17 Apr 2013 02:35 PM PDT I've installed and successfully configured our SQL Server 2012 AlwaysOn 2-node servers for our new "Intranet" that is coming out. I've gotten AlwaysOn working great, and our front-end servers for the Intranet will be using SharePoint 2013. The glitch is that SharePoint 2013 is configured to add databases automatically to our SQL Server 2012 back end, but NOT to AlwaysOn. In reading about this and in contacting Microsoft MSDN support, the default answer is "you must manually find, select, back up and then add those new databases individually to get them into AlwaysOn." But wait; that can be quite a task, constantly checking the SQL Server back-end servers to see what databases were created, then having to add them into AlwaysOn, 24/7! I'm looking for a script or process that will check for new databases, back those new databases up in FULL mode (for being added to AlwaysOn, of course), then add those databases to AlwaysOn, all automatically, or have this run every 1-2 hours without user intervention. What I've come up with so far is this script that identifies the newly-added databases (not yet in AlwaysOn) and then backs them up to a shared location. My next task is to find those newly-added databases and, through the various processes needed, get them added to AlwaysOn. This will involve some sort of looping action, I imagine. I'm not a T-SQL/scripting guru; is there any solution or script that I might access that would do this (add databases to AlwaysOn automatically)? Please advise; I'm sure I'm not the first person to have this issue. I have seen previous posts on various Internet sites (including this one!), and the solution is either incorrect or states something like "sure, go ahead and just script that!". Thanks, but I need just a little more detail there. Thanks again, -Allen
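A rough sketch of the primary-side loop being described, with an assumed availability group name (IntranetAG) and backup share. Note that a real solution also has to restore each database WITH NORECOVERY on the secondary and run ALTER DATABASE ... SET HADR AVAILABILITY GROUP there, which this sketch omits:

```sql
-- Find user databases in FULL recovery that are not yet in any
-- availability group, take a FULL backup, and add them to the AG.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cursor CURSOR FOR
    SELECT d.name
    FROM sys.databases AS d
    LEFT JOIN sys.availability_databases_cluster AS adc
           ON adc.database_name = d.name
    WHERE adc.database_name IS NULL
      AND d.database_id > 4                     -- skip system databases
      AND d.recovery_model_desc = 'FULL';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- A FULL backup is a prerequisite for joining an availability group.
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
             + N' TO DISK = N''\\FileShare\AGBackups\' + @db + N'.bak'';';
    EXEC (@sql);

    SET @sql = N'ALTER AVAILABILITY GROUP [IntranetAG] ADD DATABASE '
             + QUOTENAME(@db) + N';';
    EXEC (@sql);

    FETCH NEXT FROM db_cursor INTO @db;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
```

Scheduled as a SQL Agent job every hour or two, this covers the primary-side half of what's being asked.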
How can I join one to one to (many to one) without using group_concat Posted: 17 Apr 2013 11:36 AM PDT I have a table structure as follows: What I need to achieve is as follows: I need to get each user_item and the data, emails, and IPs associated with it. user_items links to data, data_emails, and data_ips by the columns 'name' and 'version'; data_emails and data_ips link to emails and ips respectively with email_id/ip_id = ID. Because multiple joins result in row multiplication, I have had to use nested subqueries. Because a user can have multiple emails and IPs associated with an item, I have used group_concat to group them onto that particular row. I then explode this column in my PHP; this in itself seems inefficient, but I see no other way of doing it. Now I appreciate that I could speed this up with proper indexes (which I am getting my head fully around atm), but I am doubtful that there would be decent speed improvements, and others have likewise been doubtful. Without nested subqueries and group_concat, no one on the Internet seems to be able to propose another way of doing this. I ideally want to be able to do this in a SQL query, as I want to integrate search functionality whereby a user, for example, can search for items he has which have a given email associated with them. This would be incredibly complex/resource-intensive if the tables are not joined. Does ANYONE have any proposals as to how a SQL join such as that described could be implemented without huge load times? Thanks
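For comparison, the GROUP_CONCAT shape being described looks roughly like this (hypothetical column names based on the prose); it avoids row multiplication by packing each child set into one string per row:

```sql
-- One row per user item, with associated emails and IPs packed into strings.
SELECT ui.id,
       ui.name,
       GROUP_CONCAT(DISTINCT e.email) AS emails,
       GROUP_CONCAT(DISTINCT i.ip)    AS ips
FROM user_items ui
JOIN data d              ON d.name = ui.name AND d.version = ui.version
LEFT JOIN data_emails de ON de.name = d.name AND de.version = d.version
LEFT JOIN emails e       ON e.id = de.email_id
LEFT JOIN data_ips di    ON di.name = d.name AND di.version = d.version
LEFT JOIN ips i          ON i.id = di.ip_id
GROUP BY ui.id, ui.name;
```

The usual alternative to both GROUP_CONCAT and nested subqueries is to run one query per child set (items, then their emails, then their IPs) and stitch the results together in PHP, trading one complex query for two or three simple indexed ones.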
How to optimize a query that's running slow on Nested Loops (Inner Join) Posted: 17 Apr 2013 02:54 PM PDT Consider the following simple query (only 3 tables involved). This is a pretty simple query; the only confusing part is the last category join, which is written that way because category level 5 might or might not exist. At the end of the query I am looking for category info per product ID (SKU ID), and that's where the very large table category_link comes in. Finally, the table #Ids is just a temp table containing 10,000 IDs. When executed, I get the following actual execution plan: As you can see, almost 90% of the time is spent in the Nested Loops (Inner Join). Here's extra information on those Nested Loops: Note that the table names don't match exactly because I edited the query table names for readability, but it's pretty easy to match (ads_alt_category = category). Is there any way to optimize this query? Also note that in production the temp table #Ids doesn't exist; it's a Table Valued Parameter with the same 10,000 IDs passed to the stored procedure. Additional info:
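One detail here commonly explains the plan: a table-valued parameter carries no statistics, so the optimizer tends to assume very few rows and picks nested loops for everything downstream. Two hedged things to try, sketched against hypothetical table and column names:

```sql
-- OPTION (RECOMPILE) lets the optimizer see the TVP's real row count
-- (10,000 here) at execution time, which often flips the join strategy.
SELECT cl.sku_id, c.category_name
FROM #Ids AS i
JOIN dbo.category_link AS cl ON cl.sku_id = i.id
JOIN dbo.category      AS c  ON c.category_id = cl.category_id
OPTION (RECOMPILE);

-- Alternatively, steer the physical join directly when the input is
-- known to be large:  ... OPTION (HASH JOIN);
```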
Buffer usage for index with no reads Posted: 17 Apr 2013 02:14 PM PDT Using SQL Server 2008 R2, the main transactional table in a vendor database is massive compared to all others and has about 14 indexes. Some of these indexes don't get used in our environment, but we can't remove them. That's not a problem; it's just something we have to live with. The question I have is about why some of these low-or-no-read indexes seem to be using memory, more than other indexes on the same large table that get used often. I would have thought that most of the buffer cache would go towards objects that are read from frequently; the only thing happening on these indexes is write overhead. For example, one of these low-read indexes is being allocated about 2 GB of memory (58% of the index's total size) and another has 1.7 GB of memory (27% of its size). Meanwhile, the monster-sized and well-used clustered index itself only has 4 GB (2% of its size), and a different NC index with lots of reads only has 100 MB in the buffer cache (5% of its size). Looking at the physical stats, I can see the fragmentation is pretty bad. That's understandable from all the writes on this table and the non-sequential inserts. I'm not sure if it could be related to memory usage, though. Looking at the operational stats for these indexes is also interesting.
I realize this is an abstract question and that I'm not providing many actual stats. I'm just curious about how SQL Server comes to these buffer cache usage decisions and wonder if anyone out there understands it.
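One grounded observation: pages have to be in the buffer pool to be modified, so every insert or update against a write-heavy index pulls its pages into cache even if nothing ever reads the index back. To see the per-index breakdown directly, a standard DMV query along these lines works (dbo.BigTable is a placeholder name):

```sql
-- Buffer pool pages per index for one table, largest consumers first.
SELECT i.name AS index_name,
       COUNT(*) * 8 / 1024 AS buffer_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
  ON p.hobt_id = au.container_id AND au.type IN (1, 3)  -- in-row / overflow
JOIN sys.indexes AS i
  ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.BigTable')           -- placeholder
GROUP BY i.name
ORDER BY buffer_mb DESC;
```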
How can I verify that a full database restore reflects the exact source database in SQL Server? Posted: 17 Apr 2013 09:02 PM PDT We are decommissioning our old SQL Server 2000 Ent. instance in favor of SQL Server 2008 R2 Ent. My planned migration path is:
I am being asked to provide conclusive proof that every single transaction "made it" and that the data is an exact replica of what existed on the 2000 instance. I hope that I can use the following documentation as evidence: However, if this is not sufficient, the only thing I can think of is to rip through every row of every table of every database and calculate a checksum (on both instances), as well as get row counts for every table in every database. Is there any better way to satisfy the "exact replica" verification criteria? I'm also open to better documentation.
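If it does come down to brute force, the per-table comparison can at least be generated rather than hand-written. A sketch for one database (run the equivalent on both sides and diff the output; note that BINARY_CHECKSUM skips noncomparable data types and can collide, so treat it as supporting evidence rather than proof, and on the 2000 side the catalog views differ, e.g. sysobjects instead of sys.tables):

```sql
-- Emit row count plus aggregate checksum for every user table.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'SELECT ''' + s.name + N'.' + t.name + N''' AS tbl, '
             + N'COUNT(*) AS row_cnt, '
             + N'CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS tbl_checksum FROM '
             + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;
EXEC sp_executesql @sql;
```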
How to recover data from corrupted innoDB table from a crashed mySQL server? Posted: 17 Apr 2013 08:25 PM PDT Some background: I had a MySQL server running on Ubuntu Server. I don't remember which versions they were running, but it was likely Ubuntu 12.04 with MySQL 5.5. During the course of a day, connections from the client application to the server kept dropping. On the next day, the server crashed and MySQL wouldn't start again. I was also getting lots of disk errors, so I decided to attempt to recover the data on a new machine. What I tried: I tried to recover the data on a machine running Ubuntu 12.04 and MySQL 5.5, but I couldn't get the service running. Before troubleshooting it, I tried the same steps on an Ubuntu Desktop machine running MySQL 5.1 that I had available, and they worked. Here's what I did:
The service started and everything seemed to work, except on one table. This is a table with BLOB data. After some trial and error, I narrowed the problematic queries down to record ids above a certain number (roughly, the last 100 records of a total of 7000). Sample of some queries and outputs: Then I started trying to dump the data in an attempt to recover it: I looked into mysqldump's options and decided to try to add To make sure the problem's in this table, I tried: ... and the command completed successfully. I tried to delete one of the offending records, and the delete command completed successfully. I guess I could delete them all, starting from the first problematic one, and minimize losses, but is there any hope of recovering them? What would you suggest I do from here? Side note: I was using Navicat to manage the server and make backups, and strangely enough, restoring that particular table from a recent backup also fails with the connection-lost error, which leads me to believe that this table has been having problems for a while now, maybe due to disk errors and not specifically due to the crash.
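Since the readable rows stop at a known id, one low-risk first step is to salvage everything below the corrupted range before deleting anything (hypothetical table name and cutoff id; if the server only starts with innodb_force_recovery set in my.cnf, levels of 4 and above make the data read-only, which is fine for this):

```sql
-- Export the intact rows to a server-writable file; the table can later be
-- rebuilt and the file re-imported with LOAD DATA INFILE.
SELECT *
INTO OUTFILE '/tmp/blob_table_salvage.txt'
FROM blob_table
WHERE id < 6900;   -- cutoff just below the first corrupted record
```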
How to permanently delete a row in oracle forever Posted: 17 Apr 2013 01:13 PM PDT I want to delete a record from a table, and no one should ever be able to recover the deleted data. I have done the delete operation, but the record is still in the undo segments, and one can do a flashback and retrieve that row.
Executing Child packages without using master package Posted: 17 Apr 2013 08:30 PM PDT I have many fact and dimension packages which are executed using a VB script. Today one of my seniors proposed that these packages should be executed individually instead of creating a master package that executes the child packages using the Execute Package Task. His reasoning was that the Execute Package Task will create several threads, which will be performance-intensive compared to executing the packages individually. Leaving aside sharing of config files and other issues, how can executing them individually be more performant than creating a master package for all the child packages? By the way, the child packages are executed out of process, which is configured in the Execute Package Task.
Can SQL Server 2008 / 2012 have execute permission in a function for a user, but not for a table used in the function? Posted: 17 Apr 2013 01:54 PM PDT I have a function that returns a table. This function SELECTS
Can a user have permission to read only functions, but not tables?
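Yes, through ownership chaining: if the function and the tables it reads share the same owner, SQL Server does not re-check table permissions when a user queries the function. A minimal sketch with hypothetical names:

```sql
-- Both objects owned by dbo, so the ownership chain is unbroken.
CREATE TABLE dbo.Secrets (id int, payload varchar(100));
GO
CREATE FUNCTION dbo.GetSecrets()
RETURNS TABLE
AS RETURN (SELECT id, payload FROM dbo.Secrets);
GO
-- The user can query the function despite having no grant on the table.
GRANT SELECT ON dbo.GetSecrets TO SomeUser;
```

(For a table-valued function the right to grant is SELECT; EXECUTE applies to scalar functions.)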
Migrating from SQL Server 2005 to SQL Server 2008 R2 on a different box Posted: 17 Apr 2013 12:26 PM PDT We currently have a SQL Server 2005 instance hosting a lot of application-specific databases (ArchiveManager, SolarWinds, Kaspersky etc.). We are planning to deploy a new SQL Server 2008 R2 instance on a new server and slowly migrate across to this instance. All of the applications have fairly straightforward migration plans for changing the database location, normally revolving around stopping services, detaching, attaching and rerunning installers. All looks good so far. My only concern is with SQL Server Agent jobs and logins. Initially I was under the impression that these were all stored in the system databases and that I could move those. I understand now that this is not the case. Microsoft KB 918992 suggests running a script to generate a stored procedure and then calling this and running the generated script on the new instance. There seem to be a lot of caveats to it, though. Another option would be to use SSIS to generate "Transfer Logins" and "Transfer Jobs" packages and run those (both editions are Standard), but I am unversed in SSIS, so I am a touch scared to use it. I suppose my question boils down to this:
I am trying to get it as clear in my head as possible for my planning documentation, so I am very appreciative of any replies/guidance.
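For reference, the script in KB 918992 creates a stored procedure called sp_help_revlogin; once the procedures from the article exist on the 2005 instance, the usage side is just:

```sql
-- Generates CREATE LOGIN statements preserving original SIDs and password
-- hashes; run the generated script on the 2008 R2 instance.
EXEC sp_help_revlogin;
```

Because SIDs are preserved, database users in the attached databases line up with the recreated logins without orphaned-user fixes. Agent jobs still have to move separately (script them out, or use the SSIS Transfer Jobs task).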
MSDTC on SQL Server 2012 with mount point for data disk Posted: 17 Apr 2013 11:29 AM PDT I have installed a new SQL Server 2012 instance on a cluster. The installation went fine, but when I added the MSDTC as a resource of the instance, it went wrong. When I first added it, the MSDTC was online, but when I tried my first failover it refused to go online. The error message was "MSDTC failed". My configuration is a physical disk (say J:) with two mount points, "DATA" and "LOG". I have read about people who experienced this kind of error with mount points and SQL Server 2008, but nothing about 2012. I tried to create the "MSDTC" folder on the physical drive (the J: one), but nothing improved. Does anyone know something about this, or how to configure the MSDTC and force it to use the J: drive? Thanks a lot!
Backing up MySQL database Posted: 17 Apr 2013 11:22 AM PDT I have quite a large and rather important DB to back up. First off, what's the best way to do it? Secondly, the easiest way I know is to export it through phpMyAdmin, but I'm worried about the options. Although I'm quite sure the "Add DROP DATABASE" option will not drop actual databases, I'm interested in what these options are for. (see pic) Thanks.
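On the "Add DROP DATABASE" worry: export options like these only change what gets written into the dump file; nothing is executed against the live database during the export. The option prepends a statement along these lines (placeholder names), which takes effect only if the dump is later imported:

```sql
-- "Add DROP DATABASE" writes this once near the top of the dump, so a
-- re-import starts from a clean slate:
DROP DATABASE IF EXISTS `mydb`;
-- The related "Add DROP TABLE" option writes one of these before each
-- CREATE TABLE in the dump:
DROP TABLE IF EXISTS `mytable`;
```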
SQL 2008: Why is *any* user, with minimal permissions able to restart/stop the SQL instance through Management Studio Posted: 17 Apr 2013 01:26 PM PDT So I'm on a project where I need to help admin SQL Server 2008. I'm coming from a MySQL background. Why is it that a basic user, with the most basic/default permissions, is able through Management Studio to restart or stop the entire SQL instance, but not to view a simple table?
Optimizing queries on a range of timestamps (two columns) Posted: 17 Apr 2013 03:45 PM PDT I use postgresql-9.1 with Ubuntu 12.04. I need to select records inside a range of time: my table this table now contains something like 2,000,000 records. But the time spent was so enormous for a query like: that I tried creating the inverted index: I have the impression that the times spent are better: The time needed for accessing the records in the middle of the table seems more reasonable (somewhere between 40 and 90 seconds), but it's several tens of seconds for values in the middle of the time range, and twice as long when targeting the end of the table (chronologically speaking). So I tried EXPLAIN/ANALYZE for the first time. You can see the result here on depesz.com. What could I do to optimize the search? You can see all the time is spent scanning the two timestamp columns once the id_phi is set to Should I change from timestamp types to something else? I have read (really little) about GiST and GIN indexes; I just know they can be more efficient under certain conditions for custom types. Should I go this way to have an efficient time index?
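Two hedged directions, using placeholder names modeled on the question (a measures table with id_phi plus startt/endt timestamps): a composite B-tree that leads with the equality column, and, if an upgrade to 9.2+ is an option, a GiST index over a range expression, which answers "overlaps" queries much better than two independently-indexed timestamp columns:

```sql
-- B-tree: equality column first, then both range bounds.
CREATE INDEX idx_phi_period ON measures (id_phi, startt, endt);

-- PostgreSQL 9.2+ only: index the interval itself and query with the
-- range-overlap operator (&&).
CREATE INDEX idx_phi_range ON measures USING gist (tsrange(startt, endt));

SELECT *
FROM measures
WHERE id_phi = 42
  AND tsrange(startt, endt) && tsrange('2013-04-01', '2013-04-02');
```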
Payment methods conceptual and logical model Posted: 17 Apr 2013 09:02 PM PDT I need to create conceptual and logical (normalized) models of a parking house according to the requirements below. It looks to me like a very simple concept that doesn't need all tables to have relationships, but then they could not be modelled as entities. I tried asking this on Stack Overflow but got no feedback for a couple of days now.
The problem is I don't know how to put those highlighted relations into the logical DB model, or whether to put them there at all. Is it OK practice to have isolated tables in the design?
How to check the status of SQL Server? Posted: 17 Apr 2013 08:21 PM PDT I need to check the status of SQL Server. When I say status, I don't mean the status of each database, but rather of the whole SQL Server instance. I need to know what queries are running on the server, the number of connections, and which queries are taking too long. Can anyone please help me with this?
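A few standard dynamic management views cover exactly those three things; a starting sketch:

```sql
-- Currently executing requests, longest-running first.
SELECT r.session_id, r.status, r.command,
       r.total_elapsed_time / 1000 AS elapsed_sec,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;

-- Number of client connections.
SELECT COUNT(*) AS connection_count
FROM sys.dm_exec_connections;
```

Built-in alternatives such as sp_who2 and Management Studio's Activity Monitor show much of the same information interactively.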
SQLITE: A problem of tags and products Posted: 17 Apr 2013 07:42 PM PDT I'm searching for a way to create a query to do the following: Let's consider 3 tables:
Let's consider this structure for each table: Products:
Tags:
Tag_ties:
What I want: obtain all the products which are tagged with tag ids 10, 11 and 12, for example. This query does not work, as it returns the products having at least one of the tags:
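A standard fix is relational division: filter to the wanted tags, group by product, and require the number of distinct matched tags to equal the number requested. Column names here are assumptions, since the table definitions didn't survive the digest:

```sql
-- Only products carrying ALL of tags 10, 11 and 12.
SELECT p.id, p.name
FROM products AS p
JOIN tag_ties AS tt ON tt.product_id = p.id
WHERE tt.tag_id IN (10, 11, 12)
GROUP BY p.id, p.name
HAVING COUNT(DISTINCT tt.tag_id) = 3;
```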