[how to] What can I do to repair a corrupted partitioned table on the slave?
- What can I do to repair a corrupted partitioned table on the slave?
- PostgreSQL : Restrict (another) superuser from accessing database
- What is a good way to replicate code across multiple databases?
- Polling for new records in a table
- InnoDB log sequence number is in the future
- Negative number of rows inserted
- Is it possible to keep track of changes to tables structure?
- Locking and Faulting on import
- What makes this rare error: "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (146)"
- MySQL stored procedure create database by month
- optimizing MySQL for traffic analytics system
- How do I migrate varbinary data to Netezza?
- SQL Server not using available memory
- Create Table If Not Exists Hanging
- SQLCMD Exporting 16.5 Million Rows
- Order table Database design
- Unable to generate a MySQL query
- SqlPackage does not pick up variables from profile
- Database Mail sending functionality not working on local system
- Is there a standard way to use common data and procedures across multiple databases?
- createdb: could not connect to database postgres: FATAL: could not write init file
- How to load the Adventureworks database into Oracle?
- Repeated values in group_concat
- MySQL PDO Cannot assign requested address
- Performing SELECT on EACH ROW in CTE or Nested QUERY?
- "Could not find stored procedure' even though the stored procedure have been created in MS SQL Server Management Studio
- Stored Procedures Best Practices: Fenced or Unfenced?
What can I do to repair a corrupted partitioned table on the slave? Posted: 11 Jul 2013 07:59 PM PDT
The slave has been stopped due to a power outage. The first error can be fixed by running a repair statement, but then I got another error, which I guess is because the partitioned table itself is corrupted. Is there any way to repair this table and make the slave thread continue to run instead of starting from the beginning?
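A minimal sketch of the kind of repair meant here, assuming the corrupted partitions use a storage engine that REPAIR supports (MyISAM, ARCHIVE, CSV); the database and table names are placeholders:

    STOP SLAVE;
    ALTER TABLE mydb.mytable REPAIR PARTITION ALL;   -- or: REPAIR TABLE mydb.mytable;
    START SLAVE;

If the repair succeeds, the SQL thread can continue from its current relay-log position rather than the slave having to be rebuilt from scratch.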
PostgreSQL : Restrict (another) superuser from accessing database Posted: 11 Jul 2013 07:41 PM PDT I'm new to PostgreSQL and am trying to find out whether this is possible. There are two superusers: 'postgres' (the default PostgreSQL superuser) and 'super2'. Superuser 'super2' creates a new database named 'Movies' and is its owner. Is it possible to restrict superuser 'postgres' from accessing the database 'Movies', so that only superuser 'super2' can access it because he is the owner of the database? If it is possible, how can it be done?
What is a good way to replicate code across multiple databases? Posted: 11 Jul 2013 08:24 PM PDT SQL Server 2005. I have two databases (Source1 and Source2) with the same data model, and I plan to merge the data into a third database (Target) with an "abstracted" data model. In the future, additional Source databases may need to be merged into Target. Originally, I was hoping to build views and stored procedures that would reside in the Target database and that I could run dynamically against Source1 to load Target, then switch to Source2. That doesn't seem to be possible without dynamic SQL. So plan B is to build the views and procedures in the Target database and then deploy them to Source1 and Source2. What is the best way to replicate code to multiple databases? I'm open to any new ideas that address this situation.
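For context, a minimal sketch of what the dynamic-SQL variant of "plan A" could look like; all object and column names here are placeholders, not the actual model:

    -- The procedure lives only in Target and takes the source database name as a parameter.
    CREATE PROCEDURE dbo.Load_Customer
        @SourceDb sysname
    AS
    BEGIN
        DECLARE @sql nvarchar(max);
        SET @sql = N'INSERT INTO dbo.Customer (CustomerKey, CustomerName)
                     SELECT CustomerID, Name
                     FROM ' + QUOTENAME(@SourceDb) + N'.dbo.Customers;';
        EXEC sys.sp_executesql @sql;
    END;

The trade-off is exactly the one hinted at in the question: a single copy of the code in Target, at the cost of building the statements as strings.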
Polling for new records in a table Posted: 11 Jul 2013 04:06 PM PDT We have a table that is append- and read-only. We'd like to have multiple applications consume all new rows that show up in that table, in a cheap and easy way (without change tracking). The simple suggestion of having each app record the last ID doesn't seem safe. Suppose 3 connections (2 writers and 1 reader) do this: writer 1 starts a batch insert and is assigned IDs 1-1000 but has not committed yet; writer 2 inserts IDs 1001-2000 and commits; the reader then polls. In this case, the reader would get records 1001-2000 and save 2000 as the highest ID. Obviously that's not correct. Assuming the various writers run at varying speeds (so we can never assume they'll commit in order, due to latency or performance), what options are there? Must we resort to something like change tracking, or require every application to save the last (batch*writers*x) IDs and re-read them to make sure it didn't miss any? Rollbacks and other gaps in the sequence (MSSQL can jump the IDs by 1000; pretty sure other DBs do that under certain situations) make it complicated to guess whether rows were skipped. Plus the batch inserts might not be a proper batch; they could consist of multiple INSERT statements, meaning each transaction might not even have a contiguous block of IDs assigned. We're currently using MSSQL 2012 if there's a hack specific to that system, but I'm more interested in general.
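One commonly cited pattern for this, sketched here with placeholder table and column names: add a rowversion column and use MIN_ACTIVE_ROWVERSION() as the upper bound, so the watermark never moves past rows whose transactions have not committed yet.

    ALTER TABLE dbo.Events ADD rv rowversion;

    -- reader:
    DECLARE @last binary(8) = 0x0000000000000000;       -- watermark persisted by the reader
    DECLARE @high binary(8) = MIN_ACTIVE_ROWVERSION();  -- lowest rowversion still in flight

    SELECT *
    FROM dbo.Events
    WHERE rv >= @last AND rv < @high
    ORDER BY rv;
    -- the reader then saves @high as its new watermark

Because the upper bound stops short of any in-flight transaction, the out-of-order-commit problem in the example above goes away.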
InnoDB log sequence number is in the future Posted: 11 Jul 2013 08:31 PM PDT I've never seen or had this error before. The client does not have backups, so they need a recovery done. I've done MyISAM recovery before but never InnoDB. Are there any tips on how to do it correctly? From what I've read, I need to increase a single counter ("log sequence number") which is stored somewhere in the headers of ib_logfile0 and ib_logfile1, or something like that, correct?
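A commonly described recovery path, sketched here as an assumption rather than a guaranteed fix: start the server with innodb_force_recovery, dump everything, then rebuild the InnoDB data/log files and reload.

    # my.cnf (the level is illustrative; start at 1 and raise it only as far as needed)
    [mysqld]
    innodb_force_recovery = 4

    # then, from a shell:
    #   mysqldump --all-databases > all.sql
    #   stop mysqld, move ibdata1 and ib_logfile* aside, remove innodb_force_recovery,
    #   restart mysqld and reload all.sql

Patching the LSN bytes in the ib_logfile headers by hand is sometimes described as well, but the dump-and-reload route avoids editing binary files.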
Negative number of rows inserted Posted: 11 Jul 2013 01:59 PM PDT I just ran a rather simple insert statement (that I think processed a lot of rows), and I got the message: "-1,311,750,183 rows inserted". Should I be concerned about the fact that it's negative?
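A hedged interpretation: the client tool most likely reports the rows-affected count as a signed 32-bit integer, so anything above 2,147,483,647 wraps around to a negative number. If that is what happened, the real count is the reported value plus 2^32:

    SELECT 4294967296 - 1311750183 AS probable_rows_inserted;   -- = 2,983,217,113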
Is it possible to keep track of changes to tables structure? Posted: 11 Jul 2013 12:50 PM PDT Is it somehow possible to keep track of changes made to the STRUCTURE (not the data) of tables? I mean, I don't want to know when rows are inserted/updated/deleted; what I want is to know when the table definition has been changed (and what those changes were), e.g. to know if/when a column was added or removed, a column type was changed, an index was added, etc. In other words, to keep track of all the "ALTERs" of the tables (and the state before those alters). I'd like this so I know exactly when certain changes were made, and so I can track down discrepancies in the data, for example to know when a column was changed to hold a different type. So is there a way to do this? I am using MySQL, version 5.1.66. And if it is not possible in MySQL, is it possible in other databases? Thanks
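MySQL has no DDL triggers, so one workable approach is to snapshot the table definitions from information_schema on a schedule and diff consecutive snapshots; a minimal sketch, assuming the schema is called 'mydb':

    CREATE TABLE IF NOT EXISTS schema_history AS
    SELECT NOW() AS captured_at,
           TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION,
           COLUMN_TYPE, IS_NULLABLE, COLUMN_DEFAULT
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'mydb';

    -- each later run appends a new snapshot:
    INSERT INTO schema_history
    SELECT NOW(), TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION,
           COLUMN_TYPE, IS_NULLABLE, COLUMN_DEFAULT
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'mydb';

Comparing two captured_at groups then shows added/dropped columns and type changes; indexes can be tracked the same way from information_schema.STATISTICS.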
Locking and Faulting on import Posted: 11 Jul 2013 01:20 PM PDT My collection, which has an _id index and one additional single-field index, runs on a single node on localhost with 32 GB RAM and a 16-core CPU available. The HDD is in RAID 1, without an SSD cache. Mongo version is 2.4. I run mongoimport on a dump of 5 million documents totalling about 5 GB. It starts fast, at about 11,000 documents/second, and slows down very quickly: after 3 minutes it is down to 9,000/s, a minute later 7,000/s, and after a few more minutes we are in slow motion at 200/s. The mongoimport takes hours instead of minutes. mongostat shows increasing locks and faults. How can I improve this and keep the speed of the bulk inserts constant?
What makes this rare error: "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (146)" Posted: 11 Jul 2013 02:16 PM PDT This is a rare error that occurs on Solaris; I have seen it from a Perl script that connects to the local server. It is very rare for this error to occur. Can you enumerate the possible or likely causes of this error happening only occasionally?
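On Solaris, errno 146 is ECONNREFUSED, so the socket file exists but nothing accepted the connection at that moment (for example a server restart, or the listen backlog overflowing under a burst of connections). A small diagnostic sketch, assuming access to the server:

    SHOW VARIABLES LIKE 'socket';      -- confirm the server really listens on /tmp/mysql.sock
    SHOW VARIABLES LIKE 'back_log';    -- the listen backlog; bursts beyond it get refused
    SHOW STATUS LIKE 'Aborted_connects';

Forcing the client to use TCP (host=127.0.0.1) instead of the Unix socket is another way to narrow down whether the socket itself is the problem.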
MySQL stored procedure create database by month Posted: 11 Jul 2013 12:29 PM PDT I'm a systems admin, not a DBA. I'm wondering if I could use a MySQL stored procedure to query the main database and move records, by month, to another database located on the same system. This way we can keep the main database small and have the data moved into individual per-month databases. Thank you
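A rough sketch of such a procedure, assuming a source table maindb.records with a created_at datetime column (all names are placeholders); it uses prepared statements because database names cannot be parameterised directly:

    DELIMITER //
    CREATE PROCEDURE archive_month(IN p_year INT, IN p_month INT)
    BEGIN
      SET @db  := CONCAT('archive_', p_year, '_', LPAD(p_month, 2, '0'));
      SET @sql := CONCAT('CREATE DATABASE IF NOT EXISTS ', @db);
      PREPARE s FROM @sql; EXECUTE s; DEALLOCATE PREPARE s;

      SET @sql := CONCAT('CREATE TABLE IF NOT EXISTS ', @db, '.records LIKE maindb.records');
      PREPARE s FROM @sql; EXECUTE s; DEALLOCATE PREPARE s;

      SET @sql := CONCAT('INSERT INTO ', @db, '.records SELECT * FROM maindb.records ',
                         'WHERE YEAR(created_at) = ', p_year,
                         ' AND MONTH(created_at) = ', p_month);
      PREPARE s FROM @sql; EXECUTE s; DEALLOCATE PREPARE s;

      DELETE FROM maindb.records
      WHERE YEAR(created_at) = p_year AND MONTH(created_at) = p_month;
    END//
    DELIMITER ;

Called from cron or the event scheduler (e.g. CALL archive_month(2013, 6);), this keeps the main table small; whether one database per month is better than one table per month is a separate design question.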
optimizing MySQL for traffic analytics system Posted: 11 Jul 2013 12:18 PM PDT Background: the system should present click analytics for whatever time period the user wants, for example the past hour, the past 24 hours, or the past month. For example, to generate graphs for the past month I run a set of aggregate queries over the click data, and these are where the performance issues show up.
Questions: 2. Is MySQL suitable for this application? Assume that, at maximum, my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution instead? 3. If MySQL is OK, is my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions? Thanks for your help
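On the design side, one widely used pattern for this kind of workload is to pre-aggregate clicks into a summary table so the per-period graphs never scan the raw click rows; a sketch under the assumption that the raw table is clicks(link_id, clicked_at):

    CREATE TABLE clicks_daily (
      link_id BIGINT UNSIGNED NOT NULL,
      day     DATE NOT NULL,
      clicks  INT UNSIGNED NOT NULL DEFAULT 0,
      PRIMARY KEY (link_id, day)
    ) ENGINE=InnoDB;

    -- run periodically (cron or the event scheduler) for the most recent day:
    INSERT INTO clicks_daily (link_id, day, clicks)
    SELECT link_id, DATE(clicked_at), COUNT(*)
    FROM clicks
    WHERE clicked_at >= CURDATE() - INTERVAL 1 DAY
      AND clicked_at <  CURDATE()
    GROUP BY link_id, DATE(clicked_at)
    ON DUPLICATE KEY UPDATE clicks = VALUES(clicks);

A "past month" graph then reads at most 31 rows per link from clicks_daily instead of millions of raw click rows.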
How do I migrate varbinary data to Netezza? Posted: 11 Jul 2013 12:48 PM PDT I got a warning message while migrating DDL from SQL Server to Netezza, saying that the varbinary columns are not supported and will be converted to another type. I'm wondering whether this kind of data conversion will cause issues such as truncation of data?
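Netezza has no native varbinary type, so a common workaround, sketched here on the SQL Server side with placeholder names, is to export the binary data as hex text and store it in a VARCHAR on the Netezza side (sized at twice the binary length):

    -- requires SQL Server 2008 or later for style 2 (hex without the '0x' prefix)
    SELECT id,
           CONVERT(varchar(max), binary_payload, 2) AS binary_payload_hex
    FROM dbo.SourceTable;

As long as the target column is wide enough for the hex string, nothing is truncated, and the data can be decoded back to binary downstream if needed.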
SQL Server not using available memory Posted: 11 Jul 2013 10:35 AM PDT SQL Server 2008 R2, Microsoft SQL Server Standard Edition (64-bit). The server has over 300 GB of memory, but total memory use in Control Panel never goes over 86 GB. It is a dedicated SQL Server, and the service account has been set up to lock pages in memory. Is that normal?
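Two things worth checking, sketched below: whether 'max server memory' is capping the instance, and the fact that SQL Server 2008 R2 Standard Edition is itself limited to 64 GB per instance, which alone would account for most of the gap. The numeric value in the commented line is purely illustrative.

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';             -- current cap on the buffer pool
    -- EXEC sp_configure 'max server memory (MB)', 290000;  -- only useful on an edition that allows it
    -- RECONFIGURE;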
Create Table If Not Exists Hanging Posted: 11 Jul 2013 12:58 PM PDT I am still new to using databases, but currently I am using a MySQL database and attempting to make my bash script unbreakable, because it might be used in several different environments (dev, QA, etc.). I know the script will run correctly without this one CREATE TABLE IF NOT EXISTS line. I also know that if I run that exact line against my database when the table does not exist, it runs perfectly, and if the table already exists it gives me a warning but does not break. When I run the line and there is no table named backupfiles it works perfectly, but when there is, it hangs (when I say hangs, it sits there and does nothing) 95% of the time and works 5% of the time. Has anyone run into a problem like this? I am using AWS RDS (Amazon Web Services Relational Database Service) and the MySQL server version is 5.5.27. Here is all of my code that relates to the MySQL database.
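On MySQL 5.5, CREATE TABLE IF NOT EXISTS still has to take a metadata lock on the table name, so an open transaction (or a long-running dump/backup) that has touched backupfiles will make the statement appear to hang. A quick check from a second session while it is stuck:

    SHOW FULL PROCESSLIST;                 -- look for "Waiting for table metadata lock"
    SHOW OPEN TABLES WHERE In_use > 0;     -- tables currently held open by other sessions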
SQLCMD Exporting 16.5 Million Rows Posted: 11 Jul 2013 11:30 AM PDT I am exporting a table to a TSV file using sqlcmd, and I am running into issues. The table has 16+ million rows and about 55 columns. The problem is that it does not export the full table, but seems to stop randomly at various points (I am guessing a timeout?). Each time a different number of rows is exported, and each time the file is of a slightly different size (indicating that I am not hitting any row or size limit). I am not using any timeout switch (meaning the default of "as long as it takes" is used). Here is my command (with most columns removed for simplification and illustration purposes): I wonder if it could have something to do with timeouts or the use of ISNULL() on all of the columns (although when I run the query in SQL Server Management Studio I get the correct number of rows returned, e.g. 16 million+)? Again, I get about 4-8 million rows each time, but never the full amount. I am exporting from a SQL Server 2005 database, running sqlcmd from a remote machine with SQL Server 2008.
Order table Database design Posted: 11 Jul 2013 08:05 PM PDT I need some help with the database design of the order table of a shopping cart for an online food ordering system. I have a design below. The tables I'm concerned about are order_details and order. Basically, the user selects foods from restaurant menus and places an order; user details are stored in the user table (which is not shown in the image). This is my first attempt and I was never good at database design, so please take it easy if you feel that the question is too basic for this site. And my questions are:
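For orientation, a typical orders/order_details layout for this kind of cart might look like the sketch below; every table and column name here is an assumption, not the design from the question:

    CREATE TABLE orders (
      order_id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      user_id       INT UNSIGNED NOT NULL,
      restaurant_id INT UNSIGNED NOT NULL,
      order_total   DECIMAL(10,2) NOT NULL,
      created_at    DATETIME NOT NULL
    );

    CREATE TABLE order_details (
      order_detail_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      order_id        INT UNSIGNED NOT NULL,
      menu_item_id    INT UNSIGNED NOT NULL,
      quantity        INT UNSIGNED NOT NULL,
      unit_price      DECIMAL(10,2) NOT NULL,
      FOREIGN KEY (order_id) REFERENCES orders(order_id)
    );

Keeping unit_price on the detail row (rather than joining to the menu at read time) preserves what the customer actually paid even if menu prices change later.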
Unable to generate a MySQL query Posted: 11 Jul 2013 12:21 PM PDT I have a table in a MySQL database as below. I want to generate a report, as shown below, using that table. The query that I have written is not able to show the switching that takes place.
SqlPackage does not pick up variables from profile Posted: 11 Jul 2013 05:08 PM PDT I want to upgrade a database using a .dacpac and sqlpackage.exe. Here is how I run sqlpackage: The error I get is: "The following SqlCmd variables are not defined in the target scripts: foo." I have verified that the myprofile.publish.xml file does contain that variable. I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using the same profile. What else could I be missing? (I'm using SQL Server 2012)
Database Mail sending functionality not working on local system Posted: 11 Jul 2013 12:54 PM PDT I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure execution: I have tried it with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile. Error message: What could the problem be? Thanks
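When a profile fails like this, the SMTP error that Database Mail actually hit is usually recorded in msdb; a quick check:

    SELECT TOP (20) log_date, description
    FROM msdb.dbo.sysmail_event_log
    ORDER BY log_date DESC;

    SELECT TOP (20) sent_status, last_mod_date, subject
    FROM msdb.dbo.sysmail_allitems
    ORDER BY last_mod_date DESC;

The description column typically names the real cause (authentication, TLS, or a blocked port on the company SMTP server).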
Is there a standard way to use common data and procedures across multiple databases? Posted: 11 Jul 2013 01:52 PM PDT We have several independent databases that have data and code in common, not in the sense that it is accessed between the databases but in the sense that the data means the same thing in each database, and the code does the same thing. Examples are:
For maintenance and support reasons I think it makes sense for things like error codes, procedures, functions, and types to have a "single point of truth" across all the databases, not a different truth in each database. At the moment each database has its own copy of everything, including a source repository, and we maintain them all independently. This is far from ideal because it's too easy to fix a procedure in A and forget to put it in B, or to add an error code to A and the same one to B but with different meanings, etc. The databases are not updated at the same time and they don't necessarily reside on the same hardware. There are cases where they can read data from each other (if the other one exists). Is there a standard way of having a single point of truth for data/code that is used across more than one database?
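On SQL Server, one common way to get that single point of truth, sketched here with placeholder names: keep one copy of the shared routines and reference data in a dedicated database and point each application database at it through synonyms (four-part names via a linked server work too when the databases live on different servers).

    -- in each application database:
    CREATE SYNONYM dbo.ErrorCode    FOR Common.dbo.ErrorCode;     -- shared reference table
    CREATE SYNONYM dbo.usp_LogError FOR Common.dbo.usp_LogError;  -- shared procedure

Existing callers keep using dbo.usp_LogError unchanged, but there is now only one definition to maintain and deploy.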
createdb: could not connect to database postgres: FATAL: could not write init file Posted: 11 Jul 2013 07:54 PM PDT Red Hat Enterprise Server 3.0, 32-bit; psql (PostgreSQL) 8.2.3; user postgres; the server is running. I had just created a new database cluster with initdb, but when I run createdb it fails with the error above. Any clues as to the cause and possible solutions to this problem?
How to load the Adventureworks database into Oracle? Posted: 11 Jul 2013 01:09 PM PDT I am trying to work through some of the examples at sqlzoo and I would like to load the example AdventureWorks database into an Oracle database at home. A Google search has not turned up any SQL scripts that have been converted to Oracle. Does anyone have, or know where I can get, such scripts?
Repeated values in group_concat Posted: 11 Jul 2013 03:54 PM PDT I have two tables; the first is food and the second is Activity. For now I'm using a GROUP_CONCAT query that joins them, but the concatenated values are repeated. Could you please help me? I need the output without the duplicated values.
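The usual fix, sketched with assumed column names and join condition: put DISTINCT (and, if wanted, ORDER BY) inside GROUP_CONCAT.

    SELECT f.food_name,
           GROUP_CONCAT(DISTINCT a.activity_name
                        ORDER BY a.activity_name SEPARATOR ', ') AS activities
    FROM food AS f
    JOIN Activity AS a ON a.food_id = f.id
    GROUP BY f.food_name;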
MySQL PDO Cannot assign requested address Posted: 11 Jul 2013 01:54 PM PDT Can someone help me with this error? I have a Server with a lot connections per second; out of about 100 Connections, a single one got this error. I've tried this recommendation from stackoverflow however it does not solve my problem. |
Performing SELECT on EACH ROW in CTE or Nested QUERY? Posted: 11 Jul 2013 06:54 PM PDT This is a problem in PostgreSQL. I have a table which stores a tree of users:

+------+---------+
|  id  | parent  |
+------+---------+
|  1   |    0    |
|  2   |    1    |
|  3   |    1    |
|  4   |    2    |
|  5   |    2    |
|  6   |    4    |
|  7   |    6    |
|  8   |    6    |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of that tree in terms of the total number of nodes in it.
Now I want to do something like selecting all possible trees from this table (again via connectby), count the size of each, and build another dataset with records of the ID and the size of the underlying tree, like this:

+------------------+-------------+
| tree_root_node   | tree_size   |
+------------------+-------------+
|        1         |      7      |
|        2         |      3      |
|        3         |      0      |
|        4         |      3      |
|        5         |      0      |
|        6         |      2      |
|        7         |      0      |
|        8         |      0      |
+------------------+-------------+

The problem is that I am unable to perform the same SELECT statement for every row in the original table in order to fetch the tree and calculate its size, and even if I could, I don't know how to create a separate dataset from the fetched and calculated data. I am not sure whether this is a simple use of some functions available in Postgres, whether I'd have to write a function for it, or simply what this kind of query is called; googling for hours and searching for another hour here at dba.stackexchange returned nothing. Can someone please point me in the right direction?
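One way to do this without connectby or a custom function, sketched under the assumption that the table is called users(id, parent) and that tree_size means the number of descendants (adjust if your definition differs from the sample output):

    WITH RECURSIVE subtree(root, id) AS (
        SELECT id, id FROM users              -- every node starts as the root of its own tree
        UNION ALL
        SELECT s.root, u.id
        FROM subtree s
        JOIN users u ON u.parent = s.id       -- walk down to the children
    )
    SELECT root AS tree_root_node,
           COUNT(*) - 1 AS tree_size          -- minus 1 excludes the root itself
    FROM subtree
    GROUP BY root
    ORDER BY root;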
"Could not find stored procedure" even though the stored procedure has been created in MS SQL Server Management Studio Posted: 11 Jul 2013 08:21 PM PDT I have created a table using Microsoft SQL Server 2008 Management Studio. I also created a stored procedure and am able to see it listed under the database's Stored Procedures node. I find it very strange that I receive the "Could not find stored procedure" error when I execute the SQL statement that calls it. What could be missing?
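A quick way to narrow this down, with a placeholder procedure name: confirm which database and schema actually own the procedure, then call it with the schema-qualified (or three-part) name from the right database context.

    SELECT SCHEMA_NAME(schema_id) AS schema_name, name
    FROM sys.procedures
    WHERE name = N'usp_MyProcedure';

    EXEC dbo.usp_MyProcedure;       -- or: EXEC MyDatabase.dbo.usp_MyProcedure;

Being connected to master (or another database) instead of the one holding the procedure, or creating it under a non-dbo schema, are the two most common causes of this error.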
Stored Procedures Best Practices: Fenced or Unfenced? Posted: 11 Jul 2013 11:58 AM PDT I believe I understand the reasons behind fenced and unfenced stored procedures. Fenced procedures run "outside" of the database engine (in our case DB2), so as to prevent possible corruption of the engine should there be issues with things like pointers. Unfenced procedures run "inside" the database engine, which means better performance. From what I have researched, SQL PL is always effectively unfenced, because it is SQL and therefore cannot access memory the way general programming languages can. C/C++ and Java procedures can run fenced or unfenced, but since they can access memory, they should arguably run fenced unless there is certainty about the quality of the code (that it will not crash) and the performance is genuinely needed. First of all, am I correct in my understanding of the above? Next, is it generally a best practice to start out with all stored procedures (even those defined as SQL PL) as fenced first? Any other best practices for stored procedures, especially as related to fencing and/or security?
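For reference, the fencing choice is made per routine at registration time; a minimal DB2 LUW sketch for an external C routine (library, entry point, and parameter names are placeholders):

    CREATE PROCEDURE assembly_parts (IN assembly_num INTEGER, OUT num_parts INTEGER)
      LANGUAGE C
      EXTERNAL NAME 'parts!assembly'
      FENCED
      PARAMETER STYLE SQL;

Declaring it FENCED means a crash in the C code takes down the fenced-mode process rather than the engine; once the routine is trusted, it can be re-registered as NOT FENCED for the performance gain.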