[how to] Why aren't my first few Postgres WAL segments being archived?
- Why aren't my first few Postgres WAL segments being archived?
- Cannot start sdows.ear module on WebLogic 12c
- In Job Step > Properties, only see two tabs: Package and Configuration
- Update existing SQL Server 2008 table via Flat File using SSIS
- How to gracefully handle a source database restore from backup in a data warehouse
- Mitigating MySQL replication delay in single master/multiple slave setup
- Is my MySQL server overloaded? [closed]
- How to calculate sigma > 1 in MySQL
- Changing the representation of NULLs in pg_dump plaintext output
- Formatting T-SQL in SSMS 2012
- Query for master in Postgres replication
- Replication Options with data subsets
- Issues starting Postgres server [closed]
- Copy a SQL Server database from one server to another on demand
- After database migration, Domain Users do not have access
- Creating a global temp table in MySQL
- MongoDB problems recovering a member of the replica set
- MySQL server crashed.
- SQL Server replication conflicts after migration from 2000 to 2008
- How to connect to a Database made by Oracle SQL Database?
- Backup plan for MySQL NDB cluster database (not InnoDB)
- Easiest Way to Move 100 Databases
- How to optimize a log process in MySQL?
- MySQL: Lock wait timeout exceeded
- postgresql: how do I dump and restore roles for a cluster?
- Converting RTF in a text column to plain text in bulk
- Designing a database for a site that stores content from multiple services?
- How to free up disk space? which logs/directories to clean? [ORACLE 11g]
Why aren't my first few Postgres WAL segments being archived? Posted: 21 Jun 2013 09:03 PM PDT The … The … When I first turned on archiving, … Relevant settings: … Note: I did change all of these settings (including …)
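A likely explanation, with a hedged sketch of the relevant postgresql.conf settings (the paths and values below are illustrative, not taken from the question): PostgreSQL hands a WAL segment to archive_command only once that 16 MB segment is complete, so on a lightly loaded server the first segments can sit unarchived for a long time unless a timeout or a manual switch forces them out.

```
# postgresql.conf — minimal archiving sketch (9.x naming; values are placeholders)
wal_level = archive          # at least 'archive'; 'hot_standby' if standbys read it
archive_mode = on            # changing this requires a full server restart, not a reload
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
archive_timeout = 300        # force a segment switch at least every 5 minutes
```

Running `SELECT pg_switch_xlog();` (renamed pg_switch_wal() in PostgreSQL 10) forces the current segment to complete immediately, which is a quick way to test whether the archive_command itself works.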
Cannot start sdows.ear module on WebLogic 12c Posted: 21 Jun 2013 07:52 PM PDT I've set up a CentOS 6.3 x64 box with OpenJDK 1.6, and installed Oracle 11g2 and WebLogic 12c. I'm trying to deploy a WFS to the box as an exploded EAR. Following the Oracle hands-on lab document (the 2nd lab here), I tried to deploy sdows.ear, but got odd results. The deployment seems OK: the console web pages report no errors and show all-successful messages. However, all I see at the WFS URL are errors. When I check the back-end server output I see: … I've also tried to deploy the module directly from the sdows.ear file, but a similar error occurs: … What have I done wrong?
In Job Step > Properties, only see two tabs: Package and Configuration Posted: 21 Jun 2013 04:34 PM PDT I am trying to set a job step that executes a package to use the 32-bit runtime, since it uses the SQL 10 driver against a SQL 2000 database. The job is running on SQL Server 2012. I see all kinds of examples of how to do this, where the job step properties page has 9 tabs (Set Values, Verification, Command Line, General, Configurations, Command Files, Data Sources, Execution Options and Logging); Execution Options has a checkbox for 32-bit mode. When I look at my job step, logged in to the server as an admin and running SSMS as administrator, all I see are two tabs: Package and Configuration.
Update existing SQL Server 2008 table via Flat File using SSIS Posted: 21 Jun 2013 01:50 PM PDT I am trying to update existing records or add any new ones to a SQL Server table using a flat file, and I need it to work as an SSIS package. I don't get an option to do a source query like I do for Excel, etc. when setting up the import task, and even when I was trying to use Excel my options seemed limited to simple selects for the source query. I really need to be able to run a full script that checks the first column (which is my key), updates existing records, then adds any new ones. Maybe I am taking the wrong approach here altogether, but I thought there was a way to do this with SSIS. Can someone point me in the right direction? If it were two tables and I didn't have to use SSIS, I would just write the SQL and be done with it. It's really the bridge between flat files and SSIS (automating the import) that I'm looking for here. To give a quick background of what I ultimately am trying to accomplish: we export our data into flat files and those go to an FTP server. The customer then imports from those files into table representations of them so they can report against them.
How to gracefully handle a source database restore from backup in a data warehouse Posted: 21 Jun 2013 01:13 PM PDT We are facing a challenging situation with our data warehouse source databases. Frequently these source databases will be restored from backups. There is a high turnover of data entry staff who use the system, and they make many mistakes, so the business will just restore from a backup and start over. But at that point, the data in the data warehouse will already have been processed and needs to be corrected. There could be thousands of rows of fact data which are no longer valid. Is there an appropriate design pattern to handle this scenario? For example, would you need a way to rebuild the data warehouse from scratch? Would you attempt to restore a backup of the data warehouse and then build ETLs to synchronize? Would you delete data from your fact tables and then re-insert?
Mitigating MySQL replication delay in single master/multiple slave setup Posted: 21 Jun 2013 01:10 PM PDT For a particular web application that requires high availability, I am thinking of scaling out by setting up MySQL replication with one "write" master and multiple "read" slaves (as explained here http://dev.mysql.com/doc/refman/5.1/en/replication-solutions-scaleout.html). The only issue I have is how to mitigate the replication delay between the master and the slaves. For example, say a user posts an item (writing to the master) and very quickly thereafter wants to view his posted items (reading from one of the slaves). What efficient solution can I put in place to make sure that a read operation from one of the slaves will always have all the content of any previously completed write operation to the master?
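One common mitigation, sketched below under the assumption that the application can capture the master's binary log coordinates after a write: record the position with SHOW MASTER STATUS, then have the chosen slave block briefly with MASTER_POS_WAIT before serving that user's read. The log file name and position here are illustrative:

```sql
-- On the master, immediately after the user's write:
SHOW MASTER STATUS;   -- note File and Position, e.g. 'mysql-bin.000123' / 4567

-- On the chosen slave, before reading that user's data:
-- returns once the slave has applied events up to that position,
-- or NULL / -1 on error or timeout (2-second timeout here)
SELECT MASTER_POS_WAIT('mysql-bin.000123', 4567, 2);
```

A simpler alternative that avoids the extra round trip is to pin a user's reads to the master for a short window after each of that user's writes.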
Is my MySQL server overloaded? [closed] Posted: 21 Jun 2013 12:26 PM PDT What factors can tell me whether a MySQL server is overloaded or not? Is it about the number of connected users, CPU usage, ...? Thanks.
How to calculate sigma > 1 in MySQL Posted: 21 Jun 2013 09:47 AM PDT I have a big set of data and am already calculating AVG and STD. But since STD is only 1 sigma, and 31% of all data points fall outside avg ± std, I want to know if there is any good way to compute bounds with sigma > 1. Sigma 2 or 3 would be enough. Thanks
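The multiplier is free to change: once AVG and STDDEV are computed, an avg ± k·std band for k = 2 or 3 is just arithmetic. A sketch, assuming a table `samples(val)` (both names are illustrative):

```sql
-- Count the points outside the 2-sigma band around the mean
SELECT COUNT(*) AS outside_2_sigma
FROM samples,
     (SELECT AVG(val) AS m, STDDEV_SAMP(val) AS s FROM samples) AS stats
WHERE samples.val NOT BETWEEN stats.m - 2 * stats.s
                          AND stats.m + 2 * stats.s;
```

For roughly normal data, about 5% of points fall outside 2 sigma and about 0.3% outside 3 sigma.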
Changing the representation of NULLs in pg_dump plaintext output Posted: 21 Jun 2013 06:44 PM PDT I'm trying to convert a large-ish Postgres DB (500 GB) to SQL Server 2012. After investigating a few third-party tools and being disappointed in features, performance, or both, I started pursuing a simple pg_dump/bulk import solution. Things looked promising until I realized that pg_dump represents NULLs in plaintext as "\N", which causes the bulk insert to choke on type mismatches. Even if I were to automate the pg_dump process to produce a single file per table, some of the individual tables involved are very large (20–50 GB), and performing a comprehensive find-replace — using even fast file editing options in Linux, or a Perl script — adds too much overhead to the time required for the import/export. I'm hoping there's a way to modify the NULL representation in the pg_dump output that I'm not aware of, or failing that, to get some recommendations for alternative approaches in terms of tools or strategies. Thanks in advance for your help.
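pg_dump itself has no switch for the NULL string, but per-table COPY does, which sidesteps the find-replace entirely. A sketch — the table name, path, and the choice of empty-string NULLs are assumptions, and the FORMAT csv option requires PostgreSQL 9.0 or later:

```sql
-- Server-side export with a custom NULL marker
COPY my_big_table TO '/tmp/my_big_table.csv' WITH (FORMAT csv, NULL '');

-- Or client-side via psql, when the server can't write to local disk:
-- \copy my_big_table TO 'my_big_table.csv' WITH (FORMAT csv, NULL '')
```

SQL Server's bcp utility or BULK INSERT can then consume the resulting CSV directly.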
Formatting T-SQL in SSMS 2012 Posted: 21 Jun 2013 03:56 PM PDT According to this Microsoft document: http://msdn.microsoft.com/en-us/library/ms174205.aspx I am supposed to be able to use Ctrl+K, Ctrl+D to format my SQL documents in SQL Server Management Studio 2012, but when I use that combo I get the error:
I am trying to make modifications to an existing SQL document that has no formatting at all, which makes it extremely difficult to read. Does anyone know how to make the Format Document command available so I can have SSMS format this code for me?
Query for master in Postgres replication Posted: 21 Jun 2013 12:40 PM PDT I'm trying to find out if there is a way to query for the master from a slave PostgreSQL server that has been set up with replication. From a slave, I can: … And this will give me a 't' result if I'm on a slave and an 'f' result on the master, which is step one. Next, I'd like to run a query that gives me some information about the master it's replicating from — preferably an IP address or hostname. For the record, I can query the master with: … And this will give me information about slaves. I am hoping that there is a reciprocal method for querying a slave. Is this possible? If so, how?
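For reference, the standby check the question alludes to is presumably pg_is_in_recovery(); as for the reciprocal lookup, in the 9.0–9.2 era the master's address is recorded only in the standby's recovery.conf (the primary_conninfo setting), not in any catalog — the pg_stat_wal_receiver view that exposes it in SQL did not arrive until 9.6. A sketch:

```sql
-- On any server: returns 't' on a standby, 'f' on the master
SELECT pg_is_in_recovery();
```

On the standby host itself, `grep primary_conninfo recovery.conf` in the data directory reveals the master's host and port.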
Replication Options with data subsets Posted: 21 Jun 2013 09:59 AM PDT We have an application that makes use of a SQL Server 2012 database (…). The plan is to create a replication database on the same server as ServerA, called … Additionally, the data in the tables could be reduced. For example, say we have a Person table, which has a … Can this be achieved with replication? Can we get subsets of data, from a subset of tables on the source, into our new replicated database? Would it be something like creating a VIEW on the source and replicating the results of that view as a table in the replicated database?
Issues starting Postgres server [closed] Posted: 21 Jun 2013 09:59 AM PDT I'm trying to go through the book "7 Databases in 7 Weeks" and I'm completely stuck starting a Postgres server. My current issue is that when I run … I have no idea how to go about fixing this. I'm running PostgreSQL 9.2.4 on Mountain Lion 10.8.3.
Copy a SQL Server database from one server to another on demand Posted: 21 Jun 2013 08:29 PM PDT I have two servers, Prod and Test, both running SQL Server 2008 RTM. Is it possible to have a PowerShell or VBScript script that copies a database from one server to another? The copy should of course include all contents of the database, with the exception of privileges, so I don't lock myself out from editing the destination. The destination database should be cleared/reset or completely overwritten, so that source and destination are identical. Additionally, the connection to the source should be read-only and (if possible) only able to initiate the copy process without actually having access to the data. I am slightly familiar with PowerShell, so if this only means connecting and starting a task, it should be doable. Or do I have to look for advanced solutions? Thank you.
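A hedged sketch of the T-SQL such a script would drive — the database name, share path, and file locations are placeholders. PowerShell can execute each statement against the appropriate server, for example via Invoke-Sqlcmd or SMO:

```sql
-- On Prod: a copy-only full backup, so the regular backup chain is undisturbed
BACKUP DATABASE MyDb TO DISK = N'\\share\MyDb.bak' WITH COPY_ONLY, INIT;

-- On Test: overwrite the existing copy, remapping files to local paths
RESTORE DATABASE MyDb FROM DISK = N'\\share\MyDb.bak'
WITH REPLACE,
     MOVE 'MyDb'     TO N'D:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO N'D:\Data\MyDb_log.ldf';
```

The logical file names passed to MOVE must match those reported by RESTORE FILELISTONLY against the backup file.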
After database migration, Domain Users do not have access Posted: 21 Jun 2013 12:34 PM PDT I migrated databases to new servers; however, the applications that were previously used with the databases are failing to load. I have changed the connections, etc. The jobs also seem to be failing. I have a domain account that is the job owner. However, when I try to execute the job under my username I get the following error: Executed as user: NT AUTHORITY\SYSTEM. Login failed for user.....[SQLSTATE 28000) (Error 18456). Is this related to Domain Users not having appropriate read and write access to the database? Also, how would I give all domain users permission to execute stored procedures?
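On the second part of the question — granting all domain users execute rights — a sketch; the domain and database names are placeholders:

```sql
-- Map the Windows group to a login and a database user, then grant EXECUTE
CREATE LOGIN [MYDOMAIN\Domain Users] FROM WINDOWS;
USE MyDb;
CREATE USER [MYDOMAIN\Domain Users] FOR LOGIN [MYDOMAIN\Domain Users];
-- Database-wide grant: covers every current and future stored procedure in MyDb
GRANT EXECUTE TO [MYDOMAIN\Domain Users];
```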
Creating a global temp table in MySQL Posted: 21 Jun 2013 10:34 AM PDT I am working on a MySQL stored procedure. I need to create a temp table, and I want to access this temp table whenever I execute the stored procedure. I know we can't access a temp table from another request in MySQL. Is there a way to create a temp table globally, or how else can I access the temp table across multiple requests?
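MySQL temporary tables are strictly per-connection, so the usual workaround is a permanent scratch table keyed by CONNECTION_ID(); all names below are illustrative:

```sql
CREATE TABLE IF NOT EXISTS scratch_results (
    conn_id BIGINT       NOT NULL,   -- which connection owns the row
    item_id INT          NOT NULL,
    val     VARCHAR(255) NULL,
    KEY (conn_id)
) ENGINE=InnoDB;

-- Inside the stored procedure: clear this connection's rows, then repopulate
DELETE FROM scratch_results WHERE conn_id = CONNECTION_ID();
INSERT INTO scratch_results (conn_id, item_id, val)
SELECT CONNECTION_ID(), id, name FROM source_table;
```

Because the table is permanent, its rows survive across statements and are visible to every session — as close to a "global temp table" as MySQL gets — though a periodic cleanup job should prune rows left by dead connections.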
MongoDB problems recovering a member of the replica set Posted: 21 Jun 2013 07:35 PM PDT I have a sharded database with 2 replica sets (RS1 and RS2), each with 2 servers. I had a problem yesterday with one member of RS2: the mongod instance crashed with an error. After that, I tried to recover the member by syncing it with the other member of the replica set (the sync took a long time to finish), and now I'm getting the same error again: … Any idea why this may be happening? How can I make this server sync and work? My last surviving server is now running as secondary; is there a way to make it primary for a while to get the data out of it? Thanks in advance!
MySQL server crashed. Posted: 21 Jun 2013 03:35 PM PDT Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion. I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 says that there is a newer version and won't install, and trying to install 5.6 fails. I found this error in the console: … Help me please?? I am stuck and in a world of hurt and despair.
SQL Server replication conflicts after migration from 2000 to 2008 Posted: 21 Jun 2013 06:35 PM PDT I got a suggestion over at Stack Overflow to post here — grateful for any and all help. Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with … We recently migrated this entire solution as follows:
The new web box received the 64-bit version of SQL Server Compact 3.5 tools, and we now call … The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set such that HOST_NAME() is used to make sure only records for a given inspector with this particular status show up on their mobile device. The user can sync multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections down, or other updates such as lookup table data — typical merge replication that has been working great for years. Once the field user changes the status of the inspection, it travels from the mobile device to the SQL Server database and is removed from their iPaq. The inspection has additional workflow on the web app from there on out. Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters, and the same publication settings as far as I can tell. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection then looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well; any future changes from the mobile device are intact. It is as if the web/db data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB, web app, or mobile app with respect to data manipulation during this migration.
Here is a summary of the order of operations: new row created in the database >> mobile user receives data >> mobile user updates data >> synchronizes — data is lost. Conflicts show up for all lost data. From there on out, any additional mobile changes are captured, and merge replication works in both directions flawlessly. Thanks for taking the time to read; please help. I am stuck after 3 days.
How to connect to a Database made by Oracle SQL Database? Posted: 21 Jun 2013 05:35 PM PDT I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC; it came in two zip files. I installed it, and the online portion works fine. I can log in with username sys and password **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work? I also found this LINK. Is this something I should do? I do not have a domain, though. listener.ora … tnsnames.ora …
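The localhost-to-IP change the question describes normally happens in listener.ora on the server, with a matching HOST entry in each client's tnsnames.ora. A sketch with placeholder values:

```
# listener.ora — replace HOST = localhost with the machine's real hostname or IP
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.50)(PORT = 1521))
    )
  )
```

After editing, restart the listener with `lsnrctl stop` then `lsnrctl start`, open port 1521 in the Windows firewall, and point SQL Developer at that host, port, and SID.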
Backup plan for MySQL NDB cluster database (not InnoDB) Posted: 21 Jun 2013 01:34 PM PDT I have a database which will grow to more than 250 GB. All data is in the NDB engine (2 data nodes), and no other MySQL storage engine is used for data.
Kind regards,
Easiest Way to Move 100 Databases Posted: 21 Jun 2013 09:10 PM PDT I need to move about 150 databases from one server to another.
I was planning on moving them one at a time using RedGate Packager; however, this will take a while. Is there a faster and easier way?
How to optimize a log process in MySQL? Posted: 21 Jun 2013 02:34 PM PDT In my project, I have about 100,000 users and can't control their behavior. Now, what I would like to do is log their activity in a certain task. Every activity is one record, which includes columns like user_id and some tag_ids. The problem I have is that these tasks in some cases can go up to 1,000,000 per year per user. So if I stored all these activities in one table, it would obviously become very big (= slow). What is best to do here? Create a single table per user (so I have 100,000 log tables) or put all these activities in one table? And what kind of engine should I use? One important thing to note: although I simplified the situation a bit and the following may not look normal, users can also change values in these records (like tag_ids).
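One table per user is generally an anti-pattern — 100,000 tables strain the data dictionary and the filesystem — while a single partitioned InnoDB table keeps per-user queries cheap, and InnoDB's row-level locking suits the fact that users update tag values. A sketch; names and column types are illustrative:

```sql
CREATE TABLE activity_log (
    user_id INT UNSIGNED NOT NULL,
    created DATETIME     NOT NULL,
    tag_id  INT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, created, tag_id)   -- leading user_id clusters one user's rows together
) ENGINE=InnoDB
PARTITION BY HASH(user_id) PARTITIONS 64;    -- a per-user lookup touches only one partition
```

With this layout, `WHERE user_id = ?` prunes to a single partition and scans a contiguous range of the clustered index, so the table can grow large without per-user reads slowing down.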
MySQL: Lock wait timeout exceeded Posted: 21 Jun 2013 08:35 PM PDT I have a developer who has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running … Today while it was running I checked in on it periodically; everything looked normal: the query was running, in state "copying to tmp table", and I could see the temp table on disk getting larger and larger (still plenty of free space). I could also see other processes blocked waiting for this table to be unlocked. Finally, after about 4.5 hours, they got the "Lock wait timeout exceeded; try restarting transaction" error. This is actually the 2nd time they've tried, and it seems to fail right about when I would expect it to complete. innodb_lock_wait_timeout is set to the default 50; I can't imagine the statement could run that long and still be affected by it. No errors logged, no deadlocks or other weirdness seen in SHOW ENGINE INNODB STATUS. Can anyone help me with other ideas? I'm fairly stumped on this one. Thanks
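Worth checking: since MySQL 5.5, an ALTER TABLE waits on metadata locks governed by lock_wait_timeout (default one year), a setting distinct from innodb_lock_wait_timeout — and the JDBC driver or Liquibase may also impose its own query timeout in the same few-hour range. A diagnostic sketch:

```sql
-- Two distinct timeouts exist; the error message names neither explicitly
SHOW VARIABLES LIKE '%lock_wait_timeout%';
-- innodb_lock_wait_timeout : row-lock waits inside InnoDB (default 50s)
-- lock_wait_timeout        : metadata locks, incl. ALTER TABLE (default 31536000s)

-- While the ALTER runs, look for sessions stuck behind it
SHOW FULL PROCESSLIST;   -- 'Waiting for table metadata lock' in the State column
```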
postgresql: how do I dump and restore roles for a cluster? Posted: 21 Jun 2013 12:14 PM PDT Where are roles stored in a cluster, and how do I dump them? I did a pg_dump of a db and then loaded it into a different cluster, but I get a lot of these errors: … So apparently the dump of my db does not include roles. I tried dumping the 'postgres' db, but I don't see the roles there either. Do I need to use …? PostgreSQL versions: 8.4.8 and 9.1.4. OS: Ubuntu 11.04 Natty
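Roles live in the cluster-wide pg_authid catalog, so a pg_dump of any single database (including postgres) never contains them; pg_dumpall is the tool that does. A sketch:

```
# dump only cluster-wide objects: roles and tablespaces
pg_dumpall --globals-only > globals.sql

# replay on the new cluster first, then restore the per-database dump
psql -f globals.sql postgres
```

Loading globals.sql before the database dump means the GRANT and ALTER ... OWNER statements in the dump find their roles already in place.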
Converting RTF in a text column to plain text in bulk Posted: 21 Jun 2013 05:29 PM PDT I have a legacy system with about 10 million rows in a table. In that table there is a column of type … My current method is: I have a C# program that loads the query into a DataTable using a … This works great for small tables; however, this is the first time I've had to run it on a table with such a large dataset (some of the RTF documents can be several megabytes with embedded pictures), and I am getting OutOfMemory errors from my C# program. I know I can chunk my query down into smaller batches, but I wanted to see if there is a better way that I was missing to strip off the RTF formatting. Should I just do the same thing as my current solution but query the data in smaller chunks at a time, or is there a better way to do this?
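Chunking is the standard fix here, and keyset pagination keeps each batch cheap regardless of table size. A T-SQL sketch — the table, key, and column names are assumptions:

```sql
-- Fetch one batch at a time, ordered by the key; the client records the
-- last id processed and passes it back as @last for the next round.
DECLARE @last INT = 0;   -- start before the first key

SELECT TOP (1000) id, rtf_text
FROM legacy_docs
WHERE id > @last
ORDER BY id;
-- Client side: convert each row's RTF, set @last to the max id seen,
-- and repeat until the query returns no rows.
```

Unlike OFFSET-style paging, the `WHERE id > @last` seek never rescans earlier rows, so the final batches cost the same as the first.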
Designing a database for a site that stores content from multiple services? Posted: 21 Jun 2013 04:35 PM PDT I'm building a site that implements David Allen's Getting Things Done, pulling in your email, Facebook news feed, tweets from those you follow on Twitter, and more services are planned. The problem is that I'm not a DBA, and I'm not sure how to design the database so that as I add features to the site, I won't have to artificially corrupt people's raw data for the purposes of storing it (for example, I want to add the ability to get RSS feeds sometime in the future, but I'm not sure how I'd do that without making a mess). I've put down my initial ideas using DBDesigner 4; below you'll find the diagram and the SQL. A few notes to help clarify things.
Can someone please point me in the right direction? I'd also be willing to look at a NoSQL database if suggested. Thank you for your time and consideration. Here's the SQL create script, just in case anyone wants to see it.
How to free up disk space? which logs/directories to clean? [ORACLE 11g] Posted: 21 Jun 2013 11:34 AM PDT I want to free up disk space on my Linux machine. I've drilled down the space usage and found that the following directories have a big size: … Can I delete contents from these directories? Note: I mean contents, not the directories themselves. Thanks!
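If the large directories are the usual suspects under the 11g Automatic Diagnostic Repository (trace, incident, alert), adrci is the supported way to purge them rather than deleting files by hand. The home path and the 30-day retention below are placeholders:

```
adrci
adrci> show homes
adrci> set home diag/rdbms/orcl/orcl      # pick your home from the list above
adrci> purge -age 43200 -type trace       # -age is in minutes; 43200 = 30 days
adrci> purge -age 43200 -type incident
adrci> purge -age 43200 -type cdump
```

Old audit files under the adump directory can likewise be archived and removed with plain OS tools once they are no longer needed.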
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.