[how to] PostgreSQL maintenance
- PostgreSQL maintenance
- How can I get MySQL to accept both resume and resumé on a primary key?
- Replication fails because of a non-existent foreign key
- How to order by first record of another order by result?
- Possible to run multiple Reporting Services (SSRS) instances on different machines with databases on same machine?
- How to partition an Oracle table with 65 million rows?
- Sub-query in update function
- Setting up binary replication between two PostgreSQL instances
- iSQL*Plus crashes when multiple users execute queries simultaneously in Oracle 9i
- Disk Space Considerations for Rebuilding a large Clustered Primary Key Index
- Meaning of SQLIO output "counts per second"
- Locking in "Read Committed Snapshot"
- How to Drop tables being a SYS user
- Backup / Export data from MySQL 5.5 attachments table keeps failing!
- Can't get PostgreSQL running on Ubuntu 12.04
- Full text search through many tables in MySQL
- Need help designing table with list of IDs to store
- table with item names as a composite key - joining with missing combinations
- Changing password doesn't work
- Difference between partitioning and fragmentation
- disk I/O error in SQLite
- SQL Server Import Job Removed Data After Job Failed
- Creating the MySQL slow query log file
- MariaDB, Triggers and last_insert_id
- Open source and commercial tools to build dynamic queries of fact tables in star schema
- Color syntax printing in SSMS 2012
- SSIS Script to split string into columns
- Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100
- Running total with count?
- Best practices on common person fields (Name, email, address, gender etc...)
PostgreSQL maintenance Posted: 09 Aug 2013 08:38 PM PDT I am new to PostgreSQL and want to know more about the maintenance options in pgAdmin III. I've read the documentation, and now I know that vacuuming frees some space and I need to run it regularly. Moreover, I need to update database statistics via ANALYZE so the query executor can select the best plan, and I also need to REINDEX to regenerate indexes. The window I use for these purposes is described here: http://www.pgadmin.org/docs/dev/maintenance.html I have some questions regarding the behavior and options.
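As background, these maintenance operations map to plain SQL that pgAdmin issues under the hood; a minimal sketch, with placeholder object names:

    -- Reclaim dead-row space and refresh planner statistics in one pass
    VACUUM ANALYZE;

    -- Rebuild bloated indexes from scratch (names are hypothetical)
    REINDEX TABLE my_table;
    REINDEX INDEX my_index;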
How can I get MySQL to accept both resume and resumé on a primary key? Posted: 09 Aug 2013 06:29 PM PDT If I add both, the uniqueness constraint fails, even though my field is in UTF-8. What can be done? I'm running 5.5.16.
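The likely culprit is an accent-insensitive collation such as utf8_general_ci, under which 'resume' and 'resumé' compare equal; a hedged sketch of the usual fix, with hypothetical table and column names:

    -- Under utf8_general_ci, 'resume' = 'resumé', so a UNIQUE or PRIMARY KEY
    -- rejects the second value. An accent-sensitive binary collation keeps
    -- them distinct:
    ALTER TABLE documents
        MODIFY title VARCHAR(100)
        CHARACTER SET utf8 COLLATE utf8_bin;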
Replication fails because of a non-existent foreign key Posted: 09 Aug 2013 06:46 PM PDT I have been trying to get a previously working replication to start working again. I have tried many settings, but I can't seem to make it work. Here is the scenario. I dropped all the FK constraints from the destination database (call it DestDB). I then re-initialize my replication, and when it runs I get this (de-identified) error:
It is driving me nuts because the FK indicated does not exist in that database. Now, I did copy this database (backup and restore) from another database, so the only thing I can think of is that it is somehow crossing the streams. But that seems unlikely. Any ideas would be great! NOTE: This is the query I ran to ensure that I have no FKs (it returned no rows): I also checked to ensure that the FK in question was not on the source table (in the database I am replicating from). I have also tried: Also tried dropping: I tried creating an empty database as a subscriber. That worked fine, so it is something in DestDB that is causing the issue, but the FK in the error does not exist.
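The original queries were lost from this digest, but a check along these lines (not necessarily the author's exact query) would confirm that no foreign keys remain in the destination database:

    -- Run in DestDB; returns one row per remaining foreign key
    SELECT f.name AS fk_name,
           OBJECT_NAME(f.parent_object_id) AS referencing_table
    FROM sys.foreign_keys AS f
    ORDER BY fk_name;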
How to order by first record of another order by result? Posted: 09 Aug 2013 12:54 PM PDT I am trying to create my own messaging, like SMS on phones. I have a MySQL table like this: And I want to get the information in a specially ordered way. First it has to be sorted by the id that's not you (call it the 'other' id). Then, for each section of that ordering, it needs to get the topmost record (which should be the most recent date), and then sort the sections by the date value of that record. I can do the first ordering with this: where the But the problem is how to do the second ordering, where I need to order the sections by the date of the top record (which should be the most recent date). Note that when I say section, I mean the group of records with the same 'other' id. Does anyone know how to do this? Thanks. EDIT: Example. If my id is 5: So the July 28 date should come on top, because it happened most recently, even though there are dates in that section that are before July 26. So when I do the first ordering, it orders by the other id, which would be the 6, and then the 2. The ones with 6 are one section, and the ones with 2 are the second section.
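A common pattern for this "order conversations by their newest message" requirement is to join each row to its section's latest date; a sketch under assumed table and column names (messages, my_id, other_id, msg_date):

    SELECT m.*
    FROM messages AS m
    JOIN (SELECT other_id, MAX(msg_date) AS latest
          FROM messages
          WHERE my_id = 5              -- the current user
          GROUP BY other_id) AS t
      ON t.other_id = m.other_id
    WHERE m.my_id = 5
    ORDER BY t.latest DESC,            -- newest section first
             m.other_id,               -- keep each section contiguous
             m.msg_date DESC;          -- newest message first within a section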
Possible to run multiple Reporting Services (SSRS) instances on different machines with databases on same machine? Posted: 09 Aug 2013 12:53 PM PDT With SQL Server 2012, can SSRS instances be set up on two different machines, with both instances using the same machine for their SQL Server databases? Machine 1
Machine 2
There's this document describing installing multiple instances in 2005, but on the same machine. Can you point me to a better resource for our design? Is this still supported in 2012? How are the SSRS databases (ReportServer, ReportServerTempDB) created for the 2nd instance?
How to partition an Oracle table with 65 million rows? Posted: 09 Aug 2013 03:17 PM PDT The table that needs partitioning contains a list of components and their values. One table is tblComponents. Another is tblFields. Then this table, tblValues, contains the values of the components/fields, so it has a Component foreign key as well as a Field foreign key. Right now the tblValues table contains 68,000,000 values. The Components table contains ~80,000 rows. Each component has around 500-800 fields of inventory. I was thinking about creating a partition by hash on the field id. When users query, they will pick something like "Show me the components where field id 5 = '03530'". Sample queries that are common are below: Also, the values are currently stored as CLOBs and can have numeric values or string values. Any tips regarding how you would partition or index this would be great, especially for optimizing querying.
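A hash-partitioning sketch along those lines follows; every name and the partition count are assumptions, and a list or composite scheme may suit the field-id filters better:

    CREATE TABLE tblValues (
        value_id     NUMBER PRIMARY KEY,
        component_id NUMBER NOT NULL REFERENCES tblComponents,
        field_id     NUMBER NOT NULL REFERENCES tblFields,
        field_value  CLOB
    )
    PARTITION BY HASH (field_id)
    PARTITIONS 32;   -- a power of two spreads hash partitions evenly

    -- A LOCAL index lets "field_id = 5" queries prune to a single partition:
    CREATE INDEX ix_values_field ON tblValues (field_id, component_id) LOCAL;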
Sub-query in update function Posted: 09 Aug 2013 11:30 AM PDT I have a really terrible set of data that I've inherited, and part of my job is going through and cleaning it up into something that's actually usable. Currently, I'm trying to update a column in my table based on the name of the entry. Table: (category is currently null for almost everything) My query: The problem is, no matter what I try, I keep getting the error:
I'm not sure what other way this could be done. Any ideas on how to replace the FROM statement with something else?
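The query and error text were stripped from this digest, but one common shape of this problem in MySQL is that the target table may not appear directly in the FROM clause of a subquery in its own UPDATE; the standard workaround is a derived table (all names here are hypothetical):

    -- Materializing the subquery as a derived table sidesteps the restriction
    UPDATE items AS i
    JOIN (SELECT id, 'widgets' AS category     -- derive the category from the name
          FROM items
          WHERE name LIKE '%widget%') AS src
      ON src.id = i.id
    SET i.category = src.category;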
Setting up binary replication between two PostgreSQL instances Posted: 09 Aug 2013 10:47 AM PDT I have two virtual machines: And so, I set the appropriate settings in the relevant configuration files. Each database cluster is empty, but the following command was run on both: Now, since there are no databases, I don't need to worry if they are out of sync (since there is no data to be out of sync). I start the master: and I start the standby: Examining the log files of each:
but when I double-check, replication does not appear to be working. What else could be going wrong?
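To verify whether streaming replication is actually established, these standard catalog queries help (not specific to this setup; column names are from the 9.x series):

    -- On the master: one row per connected standby
    SELECT client_addr, state, sent_location
    FROM pg_stat_replication;

    -- On the standby: returns true while WAL is being replayed in recovery
    SELECT pg_is_in_recovery();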
iSQL*Plus crashes when multiple users execute queries simultaneously in Oracle 9i Posted: 09 Aug 2013 06:18 PM PDT In my workplace, multiple users use iSQL*Plus (Oracle 9i) to access a single database through the server URL. But after some time, iSQL*Plus crashes and its services need to be restarted. This wastes a lot of time and is also decreasing productivity; please help. The server is a desktop with 2 GB of RAM and a 2.5 GHz AMD processor. Number of users: 25.
Clients make requests using
Disk Space Considerations for Rebuilding a large Clustered Primary Key Index Posted: 09 Aug 2013 01:09 PM PDT We have a table in our database with over 200 million rows and a clustered primary key index which is over 100 GB. The primary key is only so large because many more millions of rows have been archived from the table and the index has not been rebuilt since. By rebuilding, I'm hoping to reduce the size of the index considerably and free up a lot of space in our database. The problem we have is that the disk only has 20 GB of free space, and I'm worried how much will be temporarily required when rebuilding the index. What is the best way to do this? Do I bite the bullet, rebuild the index and hope there is enough space? Or do I drop the PK and rebuild it from scratch? Archiving Details DB, Table and Index Details Data file free space: 5,991 MB / 151,000 MB Log file free space: 2,275 MB / 25,000 MB tempdb free space: 7,476 MB / 19,232 MB There is only one index on the table, which is the clustered primary key: Key Columns: 1x bigint We're using SQL Server 2008 R2 Standard Edition.
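As a rough sketch, sorting the rebuild in tempdb shifts much of the temporary space pressure off the data file; the index and table names below are hypothetical, and ONLINE = ON is not an option on Standard Edition:

    ALTER INDEX PK_BigTable ON dbo.BigTable
    REBUILD WITH (SORT_IN_TEMPDB = ON);

Note that a rebuild still needs roughly the size of the finished index free in the data file while the old and new copies briefly coexist, so with under 6 GB free in the data file this may still not fit without growing the file first.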
Meaning of SQLIO output "counts per second" Posted: 09 Aug 2013 09:05 AM PDT I realise that my question isn't strictly 'database', but I know that a lot of DBAs will use the SQLIO tool to check I/O performance on their systems, so I'm hoping that somebody here will be able to answer this question: When I run SQLIO, the 3rd line of output says something like: using system counter for latency timings, 14318180 counts per second What does that mean? I get different values on different systems, and because I'm comparing different systems I would like to know what that number is telling me (if anything).
Locking in "Read Committed Snapshot" Posted: 09 Aug 2013 02:44 PM PDT IF update command is run on a table with "Read Committed_Snapshot" isolation level and Commit is pending eg: 1) update table1 set col1 = col1 + 1 where PKcol < 3 2) update table1 set col1 = col1 + 1 where PKcol = 3 3) update table1 set col1 = col1 + 1 where NonPKcol < 3 4) update table1 set col1 = col1 + 1 where NonPKcol = 3 5) update table1 set col1 = col1 + 1 where PKcol < 3 and NonPKcol = 5 (In above case PKcol is primary key in table and NonPKcol is a non-primary key) then whether Update is locked for only rows satisfying 'where' condition ? (is it based on value or index or Primary column ?) |
How to Drop tables being a SYS user Posted: 09 Aug 2013 12:34 PM PDT I am not able to drop my tables using the command: It says "the table is referencing primary keys of some other tables as foreign keys." When I tried to drop the foreign key constraints using the statement: it says "cannot drop constraint". And when I tried to drop the column itself on which the foreign key constraint was applied, using the statement: it says "cannot drop column owned by SYS table". Somebody told me that SYS doesn't have the privileges to drop columns, especially when foreign key constraints are present, and suggested I make a new user account apart from the default SYS and customise all the privileges. Making a new user is a very lengthy process, and I have already read many articles on creating a new user, but it doesn't help, because I did the installation by choosing the option "install database software only" and then later created and configured the database using DBCA (Database Configuration Assistant). I might have skipped certain steps or not looked at them properly while using DBCA and just kept the default settings. Creating a new user requires the CREATE USER privilege, and the syntax given in the Oracle documentation to create a user is: This is the example I copied from the Oracle documentation. Now the problem I am facing is with the PROFILE keyword in the above statement. I don't know what value to fill in for the PROFILE keyword, and moreover the documentation suggests: "Oracle recommends that you use the Database Resource Manager rather than SQL profiles to establish database resource limits. The Database Resource Manager offers a more flexible means of managing and tracking resource use." When I started looking for the Database Resource Manager, I came across various steps such as: To administer the Resource Manager in Enterprise Manager: 1) Access the Database Home page. And it says: "The Database Home page is the main database management page in Oracle Enterprise Manager Database Control (Database Control)." It asks you to confirm certain steps to access the Database Home page, such as: 1) Ensure that the dbconsole process is running on the database host computer. I checked the dbconsole process by running the following statement at the command prompt: It showed a warning on the command prompt saying: "OC4J configuration issue. C:\oracle_base\product\11.2.0\dbhome_1\oc4j\j2ee\OC4J_DBConsole_rubbalbhusri-PC_orcl not found." Note: rubbalbhusri-PC is my system name (the name I defined for my operating system). Now, my questions are these: Is the "OC4J_DBConsole_rubbalbhusri-PC_orcl" directory created during installation itself? Why don't I have one? What should I do to get one? Please don't tell me to install the whole Oracle Database 11g again. Tell me some repair that solves this problem, if any.
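For the original "referenced by foreign keys" error, Oracle's CASCADE CONSTRAINTS clause is the usual answer; this is generic advice (the table name is hypothetical), and note that creating your own tables in the SYS schema is discouraged in the first place:

    -- Drops the table plus any foreign keys in other tables that reference it
    DROP TABLE mytable CASCADE CONSTRAINTS;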
Backup / Export data from MySQL 5.5 attachments table keeps failing! Posted: 09 Aug 2013 10:37 AM PDT Can anyone please help? I have a large table in a MySQL 5.5 database. It is a table which holds a mixture of blobs/binary data and plain data rows with links to file paths. It has just over a million rows. I am having desperate problems getting the data out of this table to migrate it to another server. I have tried all sorts: mysqldump (with and without -quick), dumping the results of a query via the command line, using a MySQL admin tool (Navicat) to open and export the data to file or CSV, and doing a data transfer (line by line) to another DB and/or another server, but all to no avail. When trying to use the DB admin tool (Navicat), it gets to approx 250k records and then fails with an "Out of memory" error. I am not able to get any error messages from the other processes I have tried, but they seem to fall over at approximately the same number of records. I have tried playing with the MySQL memory variables (buffer size, log file size, etc.) and this does seem to have an effect on where the export stops (currently I have actually made it worse). Also, max_allowed_packet is set to something ridiculously large, as I am aware this can be a problem too. I am really shooting in the dark, and I keep going round and round trying the same things and getting no further. Can anyone give me any specific guidance, or recommend any tools which I might be able to use to extract this data? Thanks in hope and advance! A little more information below, following some questions and advice: The size of the table I am trying to dump is difficult to say, but the SQL dump gets to 27 GB when the mysqldump dies. It could be approximately 4 times that in total. I have tried running the following mysqldump command: And this gives the error:
The server has 8 GB of RAM; some of the relevant settings are copied below. It is an InnoDB database/table.
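One workaround when a single monolithic dump keeps dying is to export the table in primary-key ranges so each chunk stays small; a sketch with assumed table and column names (blob columns may additionally need HEX()/UNHEX() round-tripping to survive a text format):

    -- Repeat with successive id ranges (0-250k, 250k-500k, ...)
    SELECT * INTO OUTFILE '/tmp/attachments_000.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
    FROM attachments
    WHERE id >= 0 AND id < 250000;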
Can't get PostgreSQL running on Ubuntu 12.04 Posted: 09 Aug 2013 01:09 PM PDT I ran sudo apt-get install postgresql on Ubuntu 12.04. I didn't get a postgresql.config; instead I got a postgresql.config.sample and pg_hba.conf.sample. I renamed these (they were installed in /usr/share/postgresql) and uncommented the listen address (localhost) and the port (5432). But it still doesn't seem to start. I get could not connect to server: Connection refused Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? in my webapp log (and no log from PostgreSQL). I have no clue what I'm doing; could someone please help me? I followed all the tutorials I found on the internet (I re-installed my server 3 times), but I can't get it to work. Thanks.
Full text search through many tables in MySQL Posted: 09 Aug 2013 01:50 PM PDT We have high-traffic news websites, and I want to add a feature so that every user can search through all of the site's content, such as I decided to create a table that holds all of the content from all types: I generate a unique number, for example: As you can see, this raises some questions.
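A sketch of the unified-content approach described above, with all names assumed; note that in MySQL 5.5 FULLTEXT indexes require MyISAM (InnoDB gained them only in 5.6):

    CREATE TABLE site_content (
        content_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,  -- the generated unique number
        source     ENUM('news','article','gallery') NOT NULL,
        title      VARCHAR(255) NOT NULL,
        body       MEDIUMTEXT,
        FULLTEXT KEY ft_content (title, body)
    ) ENGINE=MyISAM;

    SELECT content_id, source, title
    FROM site_content
    WHERE MATCH(title, body) AGAINST ('search terms' IN NATURAL LANGUAGE MODE);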
Need help designing table with list of IDs to store Posted: 09 Aug 2013 02:49 PM PDT I have a table that I need to create to hold saved emails for FUTURE delivery (to, from, message, scheduled send date, etc.). The catch here is that I don't know who's supposed to receive the email until the day of delivery. What I mean is, the email is created to go to certain selected organizations, but the "members" of the organization will be constantly joining and leaving, so if someone creates an email for delivery next month, the list of member email addresses to send to by then will be different. So, what I need to save in the table is the list of organizations that the email should go to, so that I can query for the latest list of member email addresses when I actually send the email. Hope that makes sense. Anyway, so my question is: what is considered a "proper design" for this? My initial thought is to just save a comma-delimited list of organization IDs. I know I will never have to search on which organizations were on the list, so I don't care if it's not query-able, and I know I could normalize it into one row per recipient organization, but it seems such an unnecessary repeat of data for no purpose, especially since I only query on the SENDER, not the recipients. So is a list of IDs just a horrible, no good, only-a-newbie-would-think-of-that, bad thing? Or can it be used in some cases? Or is there some other way to do this that I don't know about? I'm sure I can't be the only one who's run into a situation like this before!
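The conventional normalized answer is a one-row-per-recipient-organization junction table; each row is just two integers, so the "repeat of data" is negligible, and referential integrity comes free (all names assumed):

    CREATE TABLE email_recipient_org (
        email_id INT NOT NULL REFERENCES scheduled_email (email_id),
        org_id   INT NOT NULL REFERENCES organization (org_id),
        PRIMARY KEY (email_id, org_id)
    );

    -- At send time, expand to the member addresses current on that day:
    SELECT m.email_address
    FROM email_recipient_org AS r
    JOIN org_member AS m ON m.org_id = r.org_id
    WHERE r.email_id = 42;              -- hypothetical email id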
table with item names as a composite key - joining with missing combinations Posted: 09 Aug 2013 04:09 PM PDT I've got a table design with many levels of item, and each has a name that's inside a separate table with a composite key linking to a language table, i.e. Country, City, Street, Building, Floor, Suite, each with a 1-to-many relation. If a country has two languages (fr + en), the database will be polled for both, expecting a value in both languages whether or not one exists. I'd like to create a query (a view, ideally) that can combine all these rows. The problem I keep hitting is that I either get all variations of languages/names across each item, or I lose entries that do not have that language value and any subsequent (child) items. What I have done is create a table that links all levels (country, city, building, etc.) and only the city name (which determines if there is an alternate name). Then I join the tables in a view and get all the names as well. I've added a line that adds the alternate value, which is something like this: It works OK, but it's getting a bit messy and slow. Using views, this requires each item to have its own separate name view, which relies on recreating the hierarchy to check which languages are needed. The other problem related to this is that I have a location table that can have multiple items used, i.e. assuming that the location is restricted to being on a floor, or in a suite.
If I have both filled out, I can use the suiteName view (suite with alternate names), which already has the floorName; but if I only provided the floorID, then I can't use the same view and have to use two separate ones. Update: this view [CitiesBuidingsFloorsSuites] shows all the Cities, Buildings, Floors and Suites. #1: I'd like to extend this so that I have a view where every item has a name for every language that is defined in CityLanguages. (If a city doesn't have a language, no other linked tables (building, floor, suite, object) need that language.) #2: essentially the same for the Objects (get all names in all languages where the combination exists in CityLanguages); the one difference is that the suite name is not mandatory, because ObjectLocations can have a NULL suiteID. I've been able to get this working in a way that I'm not too happy with. I'm using too many views to get to this point, and because I use the last-level view to filter a city or building, the views that are used to generate the final view do a lot of redundant compiling. I hope this clears it up.
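A generic shape for requirement #1 — a name per item for every language its city defines, with a fallback when a translation is missing — crosses items with CityLanguages and LEFT JOINs the names twice (all table and column names here, including City.defaultLanguageID, are assumptions):

    SELECT b.buildingID,
           cl.languageID,
           COALESCE(bn.name, bn_def.name) AS buildingName  -- fall back to the default language
    FROM Building AS b
    JOIN City AS c           ON c.cityID = b.cityID
    JOIN CityLanguages AS cl ON cl.cityID = c.cityID
    LEFT JOIN BuildingNames AS bn
           ON bn.buildingID = b.buildingID AND bn.languageID = cl.languageID
    LEFT JOIN BuildingNames AS bn_def
           ON bn_def.buildingID = b.buildingID AND bn_def.languageID = c.defaultLanguageID;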
Changing password doesn't work Posted: 09 Aug 2013 03:58 PM PDT I need to log in to a database. I have good credentials, but when I was trying to log in with SQL Developer, I got this error message:
Then I tried to log in with SQL*Plus, which worked fine; I connected successfully and was asked to change my password. So I typed in a new password and then retyped it. Then I got another message:
Can you help me change this password? I don't have administrative permissions.
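For reference, a user can always change their own password without administrative rights; where a password-verify function is enforced, the old password must be supplied via the REPLACE clause (the user name and passwords below are placeholders):

    ALTER USER scott IDENTIFIED BY "New#Passw0rd1" REPLACE "old_password";

    -- Or, in SQL*Plus, the PASSWORD command prompts without echoing:
    -- SQL> password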
Difference between partitioning and fragmentation Posted: 09 Aug 2013 12:58 PM PDT I'm trying to learn something about distributed databases and I stumbled upon the concepts of fragmentation and partitioning. It seems to me as if both do the same thing and produce similar results (dividing a relation horizontally/vertically/both). So what's the difference between fragmentation and partitioning?
disk I/O error in SQLite Posted: 09 Aug 2013 01:58 PM PDT What are the possible things that would trigger the "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL inserts. Sample data in the file: I tried inserting that into the db file with the following command: See below the error that I get: The input lines that don't generate an error are successfully included, but I don't understand why some lines have errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding
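Two quick SQLite-side checks that often help narrow down intermittent disk I/O errors, independent of the data being inserted:

    -- Verify the database file itself is intact; prints 'ok' when clean
    PRAGMA integrity_check;

    -- The rollback journal (or WAL) lives next to the DB file; a full or
    -- unwritable directory surfaces as 'disk I/O error'. Check which mode
    -- is active:
    PRAGMA journal_mode;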
SQL Server Import Job Removed Data After Job Failed Posted: 09 Aug 2013 10:58 AM PDT I have a SQL Server job set up to import data from a table to a table on another server, using the Import/Export Data wizard. In one instance, the job failed (with, I believe, a SQL connection issue), and with that, the data from both tables was deleted. The tables on both the source and destination server were removed. Does this make sense? If anything, why would the data be removed from the source? The source is SQL Server 2012 and the destination is SQL Server 2008.
Creating the MySQL slow query log file Posted: 09 Aug 2013 11:58 AM PDT What do I need to do to generate the slow query log file in MySQL? I did: What more do I need to do?
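On MySQL 5.1 and later, the slow query log can be switched on at runtime without a restart; a sketch (the file path is a placeholder and must be writable by the mysqld user):

    SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
    SET GLOBAL long_query_time = 1;        -- seconds; log anything slower
    SET GLOBAL slow_query_log = 'ON';

    -- Confirm the settings took effect:
    SHOW GLOBAL VARIABLES LIKE 'slow_query%';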
MariaDB, Triggers and last_insert_id Posted: 09 Aug 2013 03:15 PM PDT I have a situation where I have three tables: sample, sample_name and run (extra columns removed, leaving only the relevant information). Now I would like to check if a sample (based on sample_name) exists, and if not, insert both and use the sample_id to insert a run entry as well. To do this I planned to first check if the unique name (name + project) exists and, if so, use it. However, if it doesn't, then use a trigger to do the double insertion (I have a Perl script to do this logic; I also tried the commented update version). So when the sample name does not exist, I try to insert a record with a null sample_id and thus the trigger reacts. However, this doesn't work and I get the following error. Can someone suggest how to fix this or make it work? The database is actually MariaDB 5.2.12. Thanks.
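The trigger body was lost from this digest, but a hedged reconstruction of the double-insert idea as a BEFORE INSERT trigger is below. Inside a trigger it is legal to insert into other tables, read LAST_INSERT_ID(), and set NEW columns of the row being inserted; all column names are assumed:

    DELIMITER //
    CREATE TRIGGER run_bi BEFORE INSERT ON run
    FOR EACH ROW
    BEGIN
        IF NEW.sample_id IS NULL THEN
            -- Create the parent sample, then its name row
            INSERT INTO sample (created) VALUES (NOW());
            SET NEW.sample_id = LAST_INSERT_ID();
            INSERT INTO sample_name (sample_id, name, project)
            VALUES (NEW.sample_id, NEW.name, NEW.project);  -- assumes run carries these
        END IF;
    END//
    DELIMITER ;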
Open source and commercial tools to build dynamic queries of fact tables in star schema Posted: 09 Aug 2013 02:57 PM PDT What user interfaces or tools (commercial and/or open source) can query a star schema model (Ralph Kimball dimensional model)? So, for example, if we have X fact tables, Y dimension tables, and Z cross-reference tables, we'd like to query an arbitrary fact table. If you are aware of such tools, can you provide a reference for each?
Color syntax printing in SSMS 2012 Posted: 09 Aug 2013 04:41 PM PDT It seems that SSMS in SQL Server 2012 can no longer print SQL statements with syntax coloring. Is this a bug? Is there a workaround?
SSIS Script to split string into columns Posted: 09 Aug 2013 02:58 PM PDT I have a dataset (log file) with a number of columns; one of them is "Other-Data" below (an unordered string), and I need to parse the string to create derived columns according to the u value (u1, u2, u3, etc.). The output columns should be something like: Other-Data: Can anyone help with this?
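The SSIS-native route is a Script Component, but since the sample string was lost from this digest, here is a hedged T-SQL illustration of the same parsing, assuming a made-up layout of semicolon-terminated key=value pairs:

    -- Hypothetical Other-Data layout: 'u2=beta;u1=alpha;u3=gamma;' (unordered)
    DECLARE @s varchar(200) = 'u2=beta;u1=alpha;u3=gamma;';

    SELECT
        SUBSTRING(@s, CHARINDEX('u1=', @s) + 3,
                  CHARINDEX(';', @s, CHARINDEX('u1=', @s)) - CHARINDEX('u1=', @s) - 3) AS u1,
        SUBSTRING(@s, CHARINDEX('u2=', @s) + 3,
                  CHARINDEX(';', @s, CHARINDEX('u2=', @s)) - CHARINDEX('u2=', @s) - 3) AS u2,
        SUBSTRING(@s, CHARINDEX('u3=', @s) + 3,
                  CHARINDEX(';', @s, CHARINDEX('u3=', @s)) - CHARINDEX('u3=', @s) - 3) AS u3;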
Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100 Posted: 09 Aug 2013 09:58 AM PDT I have a SQL Server 2008 R2 server with a bunch of published databases that are currently operating under compatibility level 90 (2005). The subscription databases are also SQL Server 2008 R2; however, the destination databases are set to compatibility level 100 and replication is working fine. If I change the compatibility level for the published databases, will it affect replication in any way, or will it just be a case of reinitializing all the subscriptions and restarting replication? I suspect that changing the published database compatibility level may slightly change how the replication stored procedures function, but I'm not 100% sure. Is this the case?
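The level change itself is a one-line metadata operation; whether the subscriptions then need reinitializing is the real question (the database name below is hypothetical):

    ALTER DATABASE PublishedDb SET COMPATIBILITY_LEVEL = 100;

    -- Verify:
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'PublishedDb';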
Running total with count? Posted: 09 Aug 2013 12:16 PM PDT As the title suggests, I need some help getting a running total in T-SQL. The problem is that the sum I need to do is the sum of a count: If I ran the count alone, the result would be: I need the output with the sum to be: I've done running totals before using the
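Before SQL Server 2012 (which added windowed running sums via SUM(...) OVER (ORDER BY ...)), the usual approach layers a correlated subquery over the grouped count; a sketch with assumed table and column names:

    WITH daily AS (
        SELECT CAST(created_at AS date) AS d, COUNT(*) AS cnt
        FROM dbo.Orders
        GROUP BY CAST(created_at AS date)
    )
    SELECT d, cnt,
           (SELECT SUM(d2.cnt) FROM daily AS d2 WHERE d2.d <= daily.d) AS running_total
    FROM daily
    ORDER BY d;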
Best practices on common person fields (Name, email, address, gender etc...) Posted: 09 Aug 2013 03:13 PM PDT What are the most common best practices on length and data type for common fields like:
etc.
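As a starting point only (conventions vary by locale and application), a commonly seen shape is sketched below; treat every length and type as an assumption rather than a standard:

    CREATE TABLE person (
        person_id  BIGINT        NOT NULL PRIMARY KEY,
        full_name  NVARCHAR(200) NOT NULL, -- one field avoids first/last-name assumptions
        email      VARCHAR(254),           -- 254 is the practical RFC 5321 maximum
        gender     CHAR(1),                -- ISO/IEC 5218 uses a numeric code instead
        birth_date DATE,                   -- store the date, derive the age
        phone      VARCHAR(20)             -- E.164 allows 15 digits plus formatting
    );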