What type of SQL Server Replication fits our scenario? Merge or Transactional?
- What type of SQL Server Replication fits our scenario? Merge or Transactional?
- What is a good way to determine the performance I should expect from a PostgreSQL server?
- Postgresql: Difference between pg_dump and backup/restore?
- Increment and record creation based on field value [migrated]
- What does wait_resource LOG_MANAGER mean in SQL Server 2008?
- UNIQUE index key violation
- how to set a row's value from a certain row's value?
- big db for managing user files and settings
- Find "n" consecutive free numbers from table
- How do you move a SQL Server 7 database to SQL Server 2012?
- SOLVED: Postgres 8.4.5: Bad query plan vs Good query plan when changing date range on large data table
- How to generate TPC-DS query for sql-server from templates
- Date dimension changes between Test and Production Server
- Performance Counters for SQL Server Related to iSCSI Disk Access
- Best relational database scheme for this data
- Accessing a SQL Server Denali CTP1 database with Management Studio 2012 SP1 Express
- Database Design Relationship for Patient Visit in MS Access
- SQL Server 2008 R2 database mirroring - is it possible to have a slave server for multiple master servers?
- Issue with ROW_NUMBER and DISTINCT
- I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?
- where can I get mysql-5.1.7.tar.gz or mysql-5.1.9.tar.gz?
- Can't backup transaction log in SQL Server 2012 Management Studio
- SSIS SQL Server 2012 cluster Availability Group
- Payment methods conceptual and logical model
- Does my design adhere to 3NF?
- Convert a Unix timestamp to a DATETIME in a View
- Find the first gap in an aggregation of integers in SQL Server
What type of SQL Server Replication fits our scenario? Merge or Transactional? Posted: 18 Mar 2013 08:35 PM PDT Background: In our current setup, we have a single SQL Server 2005 instance on our remotely hosted web server. We additionally have another (non-MSSQL) database which we use for our POS system, which automatically updates the web server when things (such as product information) change. This has two problems:
The solution I'm working towards is setting up a second SQL Server instance (2005 or 2008) locally at the corporate headquarters, directing the POS system at it, and using some form of replication to sync changes between them. However, I can't tell whether we should use merge replication or transactional. Question: Will Merge or Transactional replication serve us better? Our requirements are:
Which one is the better fit for our requirements?
What is a good way to determine the performance I should expect from a PostgreSQL server? Posted: 18 Mar 2013 02:59 PM PDT I'm trying to figure out how I should expect my database to perform. Basically, I need to determine when the server I have set up is reaching its performance limits, which would help me plan server scaling better. This question aims more towards ways that I can calculate or estimate expected performance (really, any idea of this would be good). This should ideally help me come up with a formula I can run based on several factors (like record size, number of rows, etc.), instead of a subjective assessment based on a particular server/DB. So, what is a good way to determine expected performance on a PostgreSQL server? Thanks very much!
Postgresql: Difference between pg_dump and backup/restore? Posted: 18 Mar 2013 07:03 PM PDT Don't they essentially do the same thing by storing all the data in a file and dumping it into another database? Is it that pg_dump just adds the data onto whatever is already there, while restore deletes everything and then dumps it in?
Increment and record creation based on field value [migrated] Posted: 18 Mar 2013 01:45 PM PDT Access 2003. Ultimately I need to create a report for printing labels: "Sample ID, Jar x of x". My table has Sample ID and Number of Jars. Is it possible to create a query that gives me 3 fields: Sample ID, Jar Number, Number of Jars, where the Jar Number creates records in increments based on Number of Jars? The table information is: I want to avoid creating a table record for each jar.
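A common Access approach, sketched here under assumed names: keep a small helper table Numbers with a single integer field N pre-filled with 1, 2, 3, … up to the largest possible jar count, then cross-join it with a non-equi WHERE. Samples, SampleID, and NumJars are stand-ins for the actual table and field names, not taken from the post.

```sql
SELECT Samples.SampleID,
       Numbers.N AS JarNumber,
       Samples.NumJars
FROM Samples, Numbers
WHERE Numbers.N <= Samples.NumJars
ORDER BY Samples.SampleID, Numbers.N;
```

This expands each sample into NumJars rows at query time, so no per-jar record is ever stored.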
What does wait_resource LOG_MANAGER mean in SQL Server 2008? Posted: 18 Mar 2013 02:10 PM PDT Let's say we have two Stored Procedures A & B within the same database. Stored Procedure A:
Stored Procedure B:
A & B execute on the same database but DO NOT have any common tables. Yesterday, B was blocked by A and I found that the
What does this LOG_MANAGER mean? What kind of resource are the sessions waiting on? Also, does it have anything to do with the transactions within the 'A' stored procedure? A quick Google search indicates that it has something to do with the growth of logs, but I would appreciate a more in-depth explanation.
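As a starting point for investigation, cumulative LOG_MANAGER waits can be read from the wait-stats DMV; a sketch (the values are cumulative since the instance last started):

```sql
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'LOG_MANAGER';
```

High wait_time_ms here is commonly associated with log-file growth or log-flush pressure, which fits the growth-of-logs explanation mentioned above.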
UNIQUE index key violation Posted: 18 Mar 2013 11:50 AM PDT I have a table with a PK and a unique non-clustered index, as follows: I have 2 jobs whose execution times I find are overlapping each other. They both include the same From what I've been able to discern, the I'm at a loss as to why this is happening. Would this have anything to do with the I know I can mitigate the duplicate key error by setting
how to set a row's value from a certain row's value? Posted: 18 Mar 2013 04:39 PM PDT I have a MySQL table named "activities" with fields id (int), tstamp (timestamp), amount (decimal), and balance (decimal). The decimal fields hold money values, and the balance field's values must satisfy: current row's balance = CHRONOLOGICALLY previous row's balance + current row's amount. Notice the last row's tstamp value is smaller than the first row's tstamp value, so when I say "previous row" I do not mean the current id minus 1. So the highest balance value must be at row #9. The problem is how to update all balances with the chronologically previous row's balance value + the current row's amount value.
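A sketch of one way to do the update, assuming balances start from zero (so each balance is simply the running total of amount in tstamp order) and that tstamp values are unique:

```sql
UPDATE activities AS a
JOIN (
    SELECT a1.id,
           (SELECT COALESCE(SUM(a2.amount), 0)
              FROM activities AS a2
             WHERE a2.tstamp <= a1.tstamp) AS running  -- running total in time order
      FROM activities AS a1
) AS t ON t.id = a.id
SET a.balance = t.running;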
big db for managing user files and settings Posted: 18 Mar 2013 02:06 PM PDT Hi, I'm about to start a project that will hopefully eventually reach a million+ users; it is a kind of site management system. The problem is how to set up the DB in MySQL. Some people suggest way 1 and others way 2, and I hope you can help me decide the right path to start with. What needs to be stored:
My ideas for storing the data are: Path 1: have a main DB with a table for usernames and passwords, and create for each user a DB which contains a table for the user's files and a table for the user's settings. Path 2: have one main DB for the system which contains a table for users and passwords, and 2 more tables for each user: user_files, user_settings. Thank you.
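For comparison, here is a sketch of a variant of path 2 in which one pair of shared tables keyed by user_id serves every user, rather than two tables per user (all names are assumptions):

```sql
CREATE TABLE users (
    user_id       INT AUTO_INCREMENT PRIMARY KEY,
    username      VARCHAR(64)  NOT NULL UNIQUE,
    password_hash VARCHAR(255) NOT NULL
);

CREATE TABLE user_files (
    file_id   INT AUTO_INCREMENT PRIMARY KEY,
    user_id   INT NOT NULL,
    file_path VARCHAR(1024) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users (user_id)
);

CREATE TABLE user_settings (
    user_id       INT NOT NULL,
    setting_name  VARCHAR(64) NOT NULL,
    setting_value VARCHAR(255),
    PRIMARY KEY (user_id, setting_name),
    FOREIGN KEY (user_id) REFERENCES users (user_id)
);
```

With an index on user_id, a single shared schema is generally easier to operate at a million users than a database or table set per user.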
Find "n" consecutive free numbers from table Posted: 18 Mar 2013 07:41 PM PDT I have a table with numbers like this (status is either FREE or ASSIGNED):

id_set  number  status
-----------------------
1       000002  FREE
1       000003  ASSIGNED
1       000004  FREE
1       000005  FREE
1       000006  ASSIGNED
1       000007  ASSIGNED
1       000008  FREE
1       000009  FREE
1       000010  FREE
1       000011  ASSIGNED
1       000012  ASSIGNED
1       000013  ASSIGNED
1       000014  FREE
1       000015  ASSIGNED

and I need to find "n" consecutive numbers, so for n = 3 the query would return:

1       000008  FREE
1       000009  FREE
1       000010  FREE

It should return only the first possible group of each id_set (in fact, it would be executed for only one id_set per query). I was checking WINDOW functions and tried some queries like I was thinking about creating a virtual column using WINDOW functions, counting preceding rows for every number where status = 'FREE', then selecting the first number where the count is equal to my "n" number. Or maybe grouping numbers by status, but only from one ASSIGNED to another ASSIGNED, and selecting only groups containing at least "n" numbers. EDIT: I found this query (and changed it a little bit) which produces groups of FREE/ASSIGNED numbers, but I would like to have all numbers from only the first group which meets the condition.
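A sketch of the grouping idea in PostgreSQL window-function terms, using the classic gaps-and-islands trick: within the FREE rows, number minus ROW_NUMBER() is constant across each consecutive run. The table name number_pool and the integer cast are assumptions; n = 3 and id_set = 1 are hard-coded for illustration.

```sql
WITH free_rows AS (
    SELECT id_set, number,
           number::int
           - ROW_NUMBER() OVER (PARTITION BY id_set ORDER BY number::int) AS grp
      FROM number_pool
     WHERE status = 'FREE'
       AND id_set = 1
)
SELECT id_set, number
  FROM free_rows
 WHERE grp = (SELECT grp
                FROM free_rows
               GROUP BY grp
              HAVING COUNT(*) >= 3        -- runs of at least n numbers
               ORDER BY MIN(number::int)  -- first such run
               LIMIT 1)
 ORDER BY number
 LIMIT 3;                                 -- return exactly n numbers
```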
How do you move a SQL Server 7 database to SQL Server 2012? Posted: 18 Mar 2013 03:01 PM PDT I would like to move a database from SQL Server 7.0 to SQL Server 2012. They are not compatible for a backup and restore. Is there a way to export from SQL Server 7.0 and import it to 2012? I only need the tables -- no views or stored procedures.
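One possible route is a two-hop upgrade, since SQL Server 2012 restores backups only from 2005 onward while 2005 can still restore SQL Server 7.0 backups. The file paths and logical file names below are placeholders:

```sql
-- Step 1: on an intermediate SQL Server 2005 instance, restore the 7.0 backup.
RESTORE DATABASE OldDb
FROM DISK = N'C:\backups\olddb_sql7.bak'
WITH MOVE 'OldDb_Data' TO N'C:\data\OldDb.mdf',
     MOVE 'OldDb_Log'  TO N'C:\data\OldDb.ldf';

-- Step 2: back the upgraded database up again in 2005 format.
BACKUP DATABASE OldDb TO DISK = N'C:\backups\olddb_2005.bak';

-- Step 3: restore that 2005-format backup on the SQL Server 2012 instance.
```

Since only the tables are needed, exporting them with bcp or the Import/Export Wizard is a simpler alternative when no intermediate instance is available.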
SOLVED: Postgres 8.4.5: Bad query plan vs Good query plan when changing date range on large data table Posted: 18 Mar 2013 12:58 PM PDT SOLVED. I was able to resolve this issue by raising the statistics target on the time.date column to 1000: ALTER TABLE time ALTER COLUMN date SET STATISTICS 1000; Now the queries are running very fast, down to 300ms from 5s. I'll be looking into patching Postgres as well. I have a table with 13.7 million rows. Up until recently, queries on this table have been quite fast. Suddenly they are slow. More specifically, the same query is fast when looking at a large date range, but crazy slow when looking at a small date range. Here are the two statements and their EXPLAIN ANALYZE output. I am using PostgreSQL 8.4.5 with default_statistics_target set to the default 100. It's obviously choosing a bad query plan for the shorter date range, but how do I resolve this? And how do I interpret the query plan? Fast query: And the SLOOOOOOOW query: UPDATE: Here is what the pg_stats table is telling me. It looks like increasing the number of values it can hold may help:
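Worth noting for anyone repeating this fix: the raised statistics target only influences the planner after the column is re-analyzed, since it is ANALYZE that repopulates pg_stats. A sketch (the table name time is quoted defensively):

```sql
ALTER TABLE "time" ALTER COLUMN date SET STATISTICS 1000;
ANALYZE "time";  -- repopulate pg_stats so the planner sees the extra detail
```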
How to generate TPC-DS query for sql-server from templates Posted: 18 Mar 2013 08:38 AM PDT I know that dsqgen is used to transform the query templates into executable SQL. But when I type in the command line "dsqgen /directory query_templates /template query1.tpl /dialect sqlserver /scale 5", there's an error: "Substitution 'STATE' is used before being initialized". I also copied the ansi.tpl and tpcds.idx files into the same directory as dsqgen.exe because dsqgen always reads these two files. Does anyone have an idea how I can solve this? Thanks in advance.
Date dimension changes between Test and Production Server Posted: 18 Mar 2013 08:32 AM PDT I have a cube deployed into a Test environment and a Production environment. I've noticed something strange happening with the date dimension. When deployed into our test environment, the Date Key comes out looking like this: However, when deployed into Production it comes out looking like this: Both servers are running SQL Server 2012 and Analysis Services. On the test environment, Analysis Services is installed on the same machine as SQL Server; on the production one it is a different server. Apart from that they should be identical... I would like the DateKey to look like it does in the Test environment, i.e. 2012-12-01 00:00:00.
Performance Counters for SQL Server Related to iSCSI Disk Access Posted: 18 Mar 2013 07:03 PM PDT I am planning to move my SQL Server databases (plus TLogs and TempDB) to a new LUN on our iSCSI SAN. The current LUNs used by these files are on their own two-disk RAID 1 disk group, and I'm going to a larger but shared 14-disk RAID 10 disk group. I want to measure the performance of the current configuration and the new configuration as I move each database over, and ensure that I am not starting to hit any disk performance issues (or see if I am actually increasing performance). There are a bunch of posts on the internet about SQL performance counters, such as this one, but I am really interested in just the few that are related to network/disk usage and any latency or limits associated with disk reads/writes. What are some of the important SQL or Windows performance counters that I should look at to create a current baseline/comparison for iSCSI disk access for SQL?
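Alongside the PerfMon counters (PhysicalDisk: Avg. Disk sec/Read and Avg. Disk sec/Write are the usual latency baselines), SQL Server keeps its own per-file I/O accounting, which makes a handy before/after comparison; a sketch:

```sql
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```

The stalls are cumulative since instance start, so a baseline means snapshotting before and after each move and comparing the deltas.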
Best relational database scheme for this data Posted: 18 Mar 2013 01:51 PM PDT I'm in the process of creating a database scheme for the following scenario:
Assuming no high-performance environment (no need to optimize for speed), what would be the best way to implement this schema? The database environment can be MySQL, MSSQL... it's more about the relational database design. I myself have come up with the following: The part I'm most uncertain about is of course the Applications_Permissions_Roles table. It is a linking table on top of another linking table; I've never used or seen this before. Another way to do it would be to replace it with a linking table between Roles and Permissions, and then use code or constraints to ensure the required relations... but that doesn't seem like a good solution to me. These things should be enforced at database level if at all possible (and it seems possible), not at code level. Secondly, is the link between Permissions.Application and Applications.Id required? I use it because there may not be any rows in Roles_Applications (such as when you've just added a new application), and then it's not possible to work out which permissions belong to which application. It is also a single point of reference to look up which application a permission belongs to. I guess this is right, but it also makes a circle in the database design; MSSQL errors on it when trying to set ON_DELETE or ON_UPDATE to cascade. Any suggestions, or is this how it's supposed to be done? Any other suggestions regarding naming conventions and such are also welcome by the way (perhaps as a comment). Thanks. Edit: Changed the title, hopefully making it clearer. The previous one was more comprehensive, but probably too convoluted.
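To make the "linking table on top of a linking table" idea concrete, here is a sketch of that approach with composite foreign keys, which lets the database itself guarantee that a role is only granted permissions of applications it is linked to. All table and column names here are assumptions, not taken from the actual schema:

```sql
CREATE TABLE Applications (
    Id INT NOT NULL PRIMARY KEY
);

CREATE TABLE Roles (
    Id INT NOT NULL PRIMARY KEY
);

CREATE TABLE Permissions (
    Id            INT NOT NULL PRIMARY KEY,
    ApplicationId INT NOT NULL REFERENCES Applications (Id),
    UNIQUE (ApplicationId, Id)  -- superkey so the composite FK below can target it
);

CREATE TABLE Roles_Applications (
    RoleId        INT NOT NULL REFERENCES Roles (Id),
    ApplicationId INT NOT NULL REFERENCES Applications (Id),
    PRIMARY KEY (RoleId, ApplicationId)
);

CREATE TABLE Applications_Permissions_Roles (
    RoleId        INT NOT NULL,
    ApplicationId INT NOT NULL,
    PermissionId  INT NOT NULL,
    PRIMARY KEY (RoleId, ApplicationId, PermissionId),
    FOREIGN KEY (RoleId, ApplicationId)
        REFERENCES Roles_Applications (RoleId, ApplicationId),
    FOREIGN KEY (ApplicationId, PermissionId)
        REFERENCES Permissions (ApplicationId, Id)
);
```

Because ApplicationId participates in both composite foreign keys, a grant row can only exist when the role is linked to the application and the permission belongs to that same application.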
Accessing a SQL Server Denali CTP1 database with Management Studio 2012 SP1 Express Posted: 18 Mar 2013 08:47 AM PDT I had Management Studio 2011 Denali CTP1 installed before, but I have uninstalled it. Now I have installed Management Studio 2012 SP1 Express without any database instances. I have a webserver with a few SQL Server instances, one of which was installed by the Denali CTP1 installer. I'm able to connect to it without problems with a Denali CTP1 SSMS, but not with 2012 SP1. When I connect with SSMS 2012 SP1 I immediately get the error:
Although I'm connected to the database (and able to create or modify users, for example), nearly every item throws some error when I expand it. For example, expanding "Databases" throws:
"Some authorization problem," I thought. But after creating a new user, granting it every possible right, and trying to log in with it, there is no difference. I'm not able to figure out the problem. I can access the database through code just fine; here is my connection string: So I figured that this is something Management Studio does.
Database Design Relationship for Patient Visit in MS Access Posted: 18 Mar 2013 12:19 PM PDT I am designing a database to store records for research purposes, and I am having trouble picturing how I can create a relational database based on the information I have. There are a total of 6 different studies, and patients can be categorized under more than one study if they fit the criteria. Once they are in a study, they are required to pay routine visits for various types of exams (a total of 10 different kinds), i.e. clinical exams, physical exams, and other examinations to keep track of information. Each of these exams must be time-stamped to allow back tracing. In addition, patients are required to complete these exams up to three times while in the study, and each examination has 10+ parameters that need to be recorded as well. So let's say there are 100 patients; these patients are categorized into studies A, B, C, D, E, F, with the possibility of being in more than one study at a time (i.e. patient_25 can be in studies A, B, and E). Each study requires 3-8 different patient visits to collect information from examinations, and some studies require the examinations to be performed up to a total of 3 times over a period of time. What would be the best way to create tables that fit this kind of structure? Here are my thoughts so far; let me know if there is a better way.
Tier 1. Studies Table: study_id (primary key), study_name, study_patient_code, patient_id (foreign key)
Tier 2. Patient Demographics: patient_id (primary key), patient_name, patient_gender, patient_dob...etc.
Tier 3. Visits Table: visit_id (primary key), patient_id (foreign key), visit_date, visit_type
Tier 4. Physical Exam: physicalexam_id (primary key), visit_id (foreign key), pe_height, pe_weight, pe_score...etc.
Clinical Exam: clinicalexam_id (primary key), visit_id (foreign key), parameters....etc
Examination 3: exam3_id (primary key), visit_id (foreign key), parameters....etc
Examination 4: exam4_id (primary key), visit_id (foreign key), parameters....etc
.....
Examination 10: exam10_id (primary key), visit_id (foreign key), parameters....etc
So basically, the study table will be linked to the patient demographics table, the patients table will be linked to visits, and visits will then be linked to the various examination tables, with fields consisting of the parameters recorded on the date of the visit. Is this a viable method of approaching this problem? Or is there a more elegant way of doing so? I hope this makes sense; if not, let me know and I'll be sure to elaborate more. UPDATE: I guess what I'm trying to ask is: where should the parameters of each exam be stored? And where should the time stamp be stored? How should I organize the tables so that we normalize the database for each visit without excessive repetition of information across the different exams and the 1st, 2nd, and 3rd visits?
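The tiers described above can be sketched as generic DDL (not Access's own dialect), with one change: a junction table resolves the many-to-many between studies and patients, since a patient_id column directly in the studies table would tie each study row to a single patient. The per-visit time stamp lives in visits, so every exam table inherits it through visit_id. Column names and types are assumptions:

```sql
CREATE TABLE patients (
    patient_id     INT PRIMARY KEY,
    patient_name   TEXT,
    patient_gender TEXT,
    patient_dob    DATE
);

CREATE TABLE studies (
    study_id   INT PRIMARY KEY,
    study_name TEXT
);

CREATE TABLE study_enrollments (  -- junction: a patient may be in several studies
    study_id   INT REFERENCES studies (study_id),
    patient_id INT REFERENCES patients (patient_id),
    PRIMARY KEY (study_id, patient_id)
);

CREATE TABLE visits (
    visit_id   INT PRIMARY KEY,
    patient_id INT REFERENCES patients (patient_id),
    visit_date TIMESTAMP,         -- the time stamp used for back tracing
    visit_type TEXT
);

CREATE TABLE physical_exams (     -- one table per exam type, as in Tier 4
    physicalexam_id INT PRIMARY KEY,
    visit_id        INT REFERENCES visits (visit_id),
    pe_height       NUMERIC,
    pe_weight       NUMERIC,
    pe_score        NUMERIC
);
```

Repeat visits (1st, 2nd, 3rd) then become extra rows in visits rather than extra columns, which keeps the exam parameters stored exactly once per visit.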
SQL Server 2008 R2 database mirroring - is it possible to have a slave server for multiple master servers? Posted: 18 Mar 2013 02:00 PM PDT The scenario is that I have two servers running SQL Server 2008 R2 (64-bit); let's call them DB1 and DB2. I have set up database mirroring for all the databases on these servers so that DB1 is the principal and DB2 is the slave. Now the question is: can I set up database mirroring on a new server DB3, running it as principal, and have DB2 work as a slave for both DB1 and DB3? Or will I need a separate new slave server (DB4) for DB3? Would this configuration be possible using separate instances on DB2?
Issue with ROW_NUMBER and DISTINCT Posted: 18 Mar 2013 07:56 AM PDT I have the following query. I need to get table data using the following SQL: Returns: Now, I know that I need to drop out the None of the
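For context, the usual cause of this pairing going wrong is that ROW_NUMBER() makes every row unique before DISTINCT runs, so DISTINCT removes nothing. Deduplicating first and numbering afterwards avoids it; a hypothetical reproduction, not the poster's actual query:

```sql
SELECT ROW_NUMBER() OVER (ORDER BY d.Name) AS RowNum,
       d.Name
FROM (SELECT DISTINCT Name FROM dbo.Customers) AS d;  -- deduplicate first, then number
```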
I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why? Posted: 18 Mar 2013 10:43 AM PDT I apologize in advance if this is a repeat; I'm not really sure how to properly ask for what I'm running into. I have a large InnoDB table set up. I have 3 sources that all INSERT concurrently at a moderately high volume (300-500 INSERTs/s). I have a Perl script running on a cron job every 5 minutes that aggregates data from the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table. The aggregates are always off, but not by much. This has been so frustrating because it's just plain simple math (SELECT account_id,sum(sold) GROUP BY account_id). I have a suspicion that it has something to do with the transaction isolation (repeatable-read). I recently found FlexViews, which looks very cool and might address my problem. But I was wondering if anyone could:
I'm like 95% sure the auto_increment thing along with transaction commit ordering is hosing me up, but I don't understand enough about DBs to really ask the right question. Oh, one thing to note: I've already checked the field types. This issue isn't the result of rounding.
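The suspicion is plausible: under concurrent inserts, a transaction can reserve a smaller auto_increment id yet commit after a larger one, so checkpointing "the last id seen" silently skips the late row. One mitigation sketch, assuming the table also has an insert-timestamp column (sales, created_at, and both variables are hypothetical names):

```sql
-- Aggregate only rows that are safely in the past, so out-of-order
-- commits have had time to land before their ids are checkpointed.
SELECT account_id, SUM(sold) AS total_sold
FROM sales
WHERE id > @last_processed_id
  AND created_at < NOW() - INTERVAL 1 MINUTE
GROUP BY account_id;
```

The new checkpoint would then be MAX(id) over the same filtered window, not the table's global maximum.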
where can I get mysql-5.1.7.tar.gz or mysql-5.1.9.tar.gz? Posted: 18 Mar 2013 07:46 PM PDT I need mysql-5.1.7.tar.gz or mysql-5.1.9.tar.gz, but I cannot find them anywhere. Can anyone give me a link to them? Thank you very much!
Can't backup transaction log in SQL Server 2012 Management Studio Posted: 18 Mar 2013 09:18 AM PDT Following this tutorial, I am opening I am using Any idea as to what I should do?
SSIS SQL Server 2012 cluster Availability Group Posted: 18 Mar 2013 09:43 AM PDT We have a three-server cluster using AlwaysOn technology and have created an Availability Group for the SSISDB catalog. How would you change the code to execute only on the primary server?
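A common sketch for AlwaysOn-aware job steps is to guard the work with a check of the local replica's role, which SQL Server 2012 exposes through the HADR DMVs (if the instance hosts several availability groups, the query would also need a filter on group_id):

```sql
IF EXISTS (SELECT 1
           FROM sys.dm_hadr_availability_replica_states
           WHERE is_local = 1
             AND role_desc = 'PRIMARY')
BEGIN
    -- run the SSIS package / maintenance work here
    PRINT 'Local replica is PRIMARY - proceeding.';
END
ELSE
    PRINT 'Local replica is not PRIMARY - skipping.';
```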
Payment methods conceptual and logical model Posted: 18 Mar 2013 08:43 PM PDT I need to create a conceptual and a logical (normalized) model of a parking house according to the requirements below. It looks to me like a very simple concept that doesn't need all tables to have relationships, but then they could not be modelled as entities. I tried asking this on Stack Overflow but have gotten no feedback for a couple of days now.
The problem is I don't know how to put those highlighted relations into the logical DB model, or whether even to put them there. Is it OK practice to have isolated tables in the design?
Does my design adhere to 3NF? Posted: 18 Mar 2013 06:43 PM PDT I created this schema with OpenOffice. Does this design adhere to 3NF? (A relationship link between Equipment and Supplier_Equipment, and between Customer and Membership, should be created, but OpenOffice isn't allowing me to do so.)
Convert a Unix timestamp to a DATETIME in a View Posted: 18 Mar 2013 10:58 AM PDT I have a table that stores a Unix timestamp. To query this as a date, I'm attempting to convert this timestamp to a datetime type in a view. Unfortunately, I can only seem to get the DATE portion out. This link describes how to do the conversion, but requires changing the Is there a way I can convert the timestamp, in SQL, to a datetime without modifying the system?
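If the database is MySQL, FROM_UNIXTIME already returns a full DATETIME with no server-level changes; a sketch with assumed table and column names:

```sql
CREATE VIEW events_v AS
SELECT id,
       FROM_UNIXTIME(created_ts) AS created_at  -- full DATETIME, not just the date part
FROM events;

-- On SQL Server the rough equivalent would be:
--   DATEADD(SECOND, created_ts, '19700101')
```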
Find the first gap in an aggregation of integers in SQL Server Posted: 18 Mar 2013 03:51 PM PDT Let's say I have a table called dbo.GroupAssignment.

GroupID | Rank
------------------
1         1
1         2
1         3
2         1
2         3
2         4
3         2
3         4
3         5

The PK is (GroupID, Rank). Normally, the ranks within a group are a contiguous sequence of integers starting from 1. But it's possible for a GroupAssignment to get removed, leaving a gap. When a new assignment is made for the group, I want to fill the first gap available. So, how could I calculate this in SQL Server?
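A sketch of one way to compute it per group: the first gap is 1 when rank 1 is missing, and otherwise one more than the smallest rank whose successor is absent:

```sql
SELECT g.GroupID,
       CASE WHEN NOT EXISTS (SELECT 1 FROM dbo.GroupAssignment a
                              WHERE a.GroupID = g.GroupID AND a.Rank = 1)
            THEN 1  -- the sequence should start at 1, so a missing 1 is the first gap
            ELSE (SELECT MIN(a.Rank) + 1
                    FROM dbo.GroupAssignment a
                   WHERE a.GroupID = g.GroupID
                     AND NOT EXISTS (SELECT 1 FROM dbo.GroupAssignment b
                                      WHERE b.GroupID = a.GroupID
                                        AND b.Rank = a.Rank + 1))
       END AS FirstGap
FROM (SELECT DISTINCT GroupID FROM dbo.GroupAssignment) AS g;
```

For the sample data this yields 4 for group 1 (no gap, so the next free rank), 2 for group 2, and 1 for group 3.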