[how to] postgresql replication - pg_stat_replication is showing empty columns |
- postgresql replication - pg_stat_replication is showing empty columns
- How to allow SqlAgent Job to access all databases (e.g. sys.databases)
- Random Problem with Variable
- Help with Cybernetics SAN Configuration
- Should snapshot agent continue to run in transactional replication?
- Error: Unable to write inside TEMP environment variable path
- filtered index null and > 0 disparity -- need explanation
- SQL Server 2008 Convert String to Datetime question
- Syntax Error - Anyone Able to Help?
- How can I find out which tables use reference partitioning from the Oracle data dictionary?
- Allow self-referential foreign keys to be null from an alter-table
- how to join table from different database with different Users of database?
- help in creating a promotion/Offer table (database schema) for shopping cart site [on hold]
- Database Tuning Advisor about Indexing
- FOR loop oracle query with calculation
- Location of the mdf file of the database
- mongo replication not happening
- Filter on a window function without writing an outer SELECT statement
- Errors while creating multiple mysql-5.5 instances
- How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2?
- Loading XML documents to Oracle 11g DB with control file
- Enabling/disabling/changing Oracle auditing without a shutdown?
- MySQL - run SELECT statement on another server without defining the table structure
- consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?
- DB2 to require password each time
- oracle streams apply: how to get a reason why LCR message was not applied
- SQL Server 2008 Setup Error 0x80070490
- Replicating data from Oracle to MySQL
- MI Data Warehouse Advice
- How can I convert from Double Precision to Bigint with PostgreSQL?
postgresql replication - pg_stat_replication is showing empty columns Posted: 20 Sep 2013 08:16 PM PDT I have a PostgreSQL 9.2 streaming replication setup. It appears that the slave is getting the updates from the master and is in sync. I've verified it by looking at the pg_xlog dir and the process list.

$ ps aux | grep 'postgres.*rec'
postgres 26349 2.3 42.9 38814656 18604176 ? Ss Sep20 24:06 postgres: startup process recovering 000000010000026E00000073
postgres 26372 4.9 0.1 38959108 78880 ? Ss Sep20 51:27 postgres: wal receiver process streaming 26E/731E05F0

And the startup logs on the slave also look alright:

2013-09-21 03:02:38 UTC LOG: database system was shut down in recovery at 2013-09-21 03:02:32 UTC
2013-09-21 03:02:38 UTC LOG: incomplete startup packet
2013-09-21 03:02:38 UTC FATAL: the database system is starting up
2013-09-21 03:02:38 UTC LOG: entering standby mode
2013-09-21 03:02:38 UTC LOG: redo starts at 26E/71723BB8
2013-09-21 03:02:39 UTC FATAL: the database system is starting up
2013-09-21 03:02:39 UTC LOG: consistent recovery state reached at 26E/75059C90
2013-09-21 03:02:39 UTC LOG: invalid xlog switch record at 26E/75059E98
2013-09-21 03:02:39 UTC LOG: database system is ready to accept read only connections
2013-09-21 03:02:39 UTC LOG: streaming replication successfully connected to primary

What worries me is the output of pg_stat_replication:

archive=> select * from pg_stat_replication;
 pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | state | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
-----+----------+------------+------------------+-------------+-----------------+-------------+---------------+-------+---------------+----------------+----------------+-----------------+---------------+------------
 999 | 16384 | replicator | walreceiver | | | | | | | | | | |
(1 row)

Is this the expected behavior? I remember seeing values for client_addr, sent_location, replay_location etc. when I did a test run some time back. Is there anything that I'm missing? |
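A common explanation for this symptom (an assumption about the asker's setup, not a confirmed diagnosis): pg_stat_replication hides most columns from non-superusers, leaving them NULL. A minimal sketch for checking that, using the 9.2 column names:

```sql
-- Sketch: check whether the current role is a superuser...
SELECT usename, usesuper FROM pg_user WHERE usename = current_user;

-- ...then re-run the query as a superuser on the master; the location
-- columns should be populated if replication is really streaming:
SELECT client_addr, state, sent_location, write_location, replay_location
FROM pg_stat_replication;
```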
How to allow SqlAgent Job to access all databases (e.g. sys.databases) Posted: 20 Sep 2013 05:09 PM PDT After creating a SQL Agent Job using SSMS with a single step that starts by calling: Instead of the job listing all the DBs it only lists 3:
From within the job it reports: Each of these exhibits the same behavior from within the job: The account 'Domain\DomainSqlAdminAccountName' has server role 'sysadmin' and has a user associated with it for most user databases and master, but not model, msdb, or tempdb. The SQL Agent account has server role 'sysadmin' but does not have users mapped to it in any database. The job "Owner" is 'Domain\DomainSqlAdminAccountName'; I also tried the Agent account and the self account. From the SSMS dialog "Edit Step" -> "General tab", "Run as" is blank (it offers no choices). But from the Advanced tab I am able to specify the 'Domain\DomainSqlAdminAccountName' account. Side experiment that seems related:
For 'SomeUserDatabase' the account has role memberships: db_owner, db_securityadmin, db_ddladmin, db_datawriter, db_datareader, db_backupoperator (and the associated login is sysadmin). How does one allow access to all databases via a SQL Agent Job? In the end this procedure will actually be calling into each of the (~30) DBs. |
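One way to narrow this down (a diagnostic sketch, reusing the asker's account name as an assumption) is to check which login the job step actually runs under, since a sysadmin login sees every database without needing per-database users:

```sql
-- Sketch: run inside the job step to see the effective security context.
SELECT SUSER_SNAME() AS effective_login, ORIGINAL_LOGIN() AS original_login;

-- If the step runs under a restricted context, impersonating the sysadmin
-- login explicitly should make all ~30 databases visible again:
EXECUTE AS LOGIN = 'Domain\DomainSqlAdminAccountName';
SELECT name FROM sys.databases;
REVERT;
```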
Random Problem with Variable Posted: 20 Sep 2013 06:43 PM PDT I managed to fix my old error, so I'm going to update this post with the new error. Basically I have to manually add an entry in phpMyAdmin with my user's ID in order for it to update using this code, because I set it to update whatever row has the same value as the user's ID. Any idea how I could do it differently so there is some sort of default automatically? I tried doing it on registration and that works out well, but I am going to have up to 10 different subjects, so how would I do this in a more efficient way? Another thing I'm trying to figure out is whether I can make it check the database for the value under 'lastpage' like I did, but then check if it is equal to page2 or above, in which case it will not update the values at all. Basically it will only update the values if that's your first time going to the page on your account. Get it? Anyone got any ideas?! |
Help with Cybernetics SAN Configuration Posted: 20 Sep 2013 05:37 PM PDT There are tons of articles out there recommending RAID/ LUN/ drive configurations for SQL Servers, but most leave me still questioning as they usually state "it depends" and for this reason I'd like to provide some specifics and get some more direct recommendations: I have a client who already installed SQL Server on a Cybernetics SAN and here are the specifics:
The SQL Server is more read intensive, with data bulk loaded nightly like a data warehouse and the web app used for looking up and reporting on the data. Periodically the client experiences sluggishness, sometimes severely, for short periods with the web application. I've also noticed Brent Ozar's Blitz script reveals slow writes to the E drive at times. I am wanting to suggest the client reconfigure their SAN as follows:
But at the very least I believe the client should do the following:
Suggestions? |
Should snapshot agent continue to run in transactional replication? Posted: 20 Sep 2013 01:46 PM PDT We have transactional replication running for a large number of publications on a SQL Server 2008 R2 2-node active/active cluster. I have noticed that the snapshot agent job runs hourly and it looks like it does a refresh of the publications (literally, a new snapshot?). During this time we experience locking issues with the blocker being this job. Should this be doing a refresh this often if no new articles are being added or changed? |
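In transactional replication the Snapshot Agent is only needed when subscribers are (re)initialized or articles change, so an hourly run is usually a leftover schedule. A sketch of finding and disabling it (the schedule name below is an assumption; take it from the first query):

```sql
-- Sketch: locate the snapshot agent job's schedule in msdb...
SELECT j.name AS job_name, s.name AS schedule_name, s.enabled
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobschedules js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules s ON s.schedule_id = js.schedule_id
WHERE j.category_id = (SELECT category_id
                       FROM msdb.dbo.syscategories
                       WHERE name = 'REPL-Snapshot');

-- ...then disable the hourly schedule (name assumed from the query above):
EXEC msdb.dbo.sp_update_schedule
     @name = 'Snapshot agent schedule',
     @enabled = 0;
```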
Error: Unable to write inside TEMP environment variable path Posted: 20 Sep 2013 02:09 PM PDT I am installing PostgreSQL and I get the following, rather famous it seems, error:
This occurs immediately upon launching. I see the postgresql splash, then this error occurs. I have tried everything I have found on the interwebs so far. It includes:
Basically everything in the top 10 Google hits. I am working with the 32-bit version, but the 64-bit version install fails with the same error. I'm able to install successfully on other machines with similar config. What else can I try? Install log file:
|
filtered index null and > 0 disparity -- need explanation Posted: 20 Sep 2013 05:56 PM PDT I'm getting behavior in a query plan I cannot explain. The difference is between two filtered indexes I'm testing with. One uses a Please note the following... if you execute these scripts and then compare the two select statements, you will probably get fifty fifty performance like I did. However in my production data the query that utilizes the Questions: Why the disparity? Where does the scan vs seek come from? Wouldn't Schema + Data: Two Queries: Update1 I copied production data into my test tables. Instead of fifty fifty, the results matched the included query plan and reproduced the disparity. The test scenario is structurally analogous. Update2 These query plans will not compile. Why? They force me to use |
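Since the actual schema was stripped from this digest, here is a minimal sketch (all names invented) of the two filtered-index variants being compared, and why the optimizer can treat them differently:

```sql
-- Sketch: two filtered indexes on the same nullable column.
CREATE NONCLUSTERED INDEX IX_t_col_notnull ON dbo.t (col)
    WHERE col IS NOT NULL;
CREATE NONCLUSTERED INDEX IX_t_col_positive ON dbo.t (col)
    WHERE col > 0;

-- A predicate such as  WHERE col > 0  implies  col IS NOT NULL, so both
-- indexes contain the needed rows, but the optimizer only matches a
-- filtered index when it can prove the query predicate implies the index
-- filter; parameterized queries often fail that proof and fall back to a
-- scan, which is one common source of a seek-vs-scan disparity.
```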
SQL Server 2008 Convert String to Datetime question Posted: 20 Sep 2013 10:28 AM PDT I have a string column in a table that displays data as I'm trying to convert this column to datetime, but I can't figure out how to extract only the date and change the data type.
Conversion failed when converting date and/or time from character string. Thank you |
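The sample value was lost from this digest, so the style number below is an assumption; on SQL Server 2008 (TRY_CONVERT only arrived in 2012) an explicit style avoids locale surprises:

```sql
-- Sketch: assuming strings like '2013-09-20 10:28:00' (ODBC style 120);
-- table and column names are placeholders.
SELECT CONVERT(datetime, str_col, 120)           AS full_datetime,
       CONVERT(date, LEFT(str_col, 10), 120)     AS date_only
FROM dbo.MyTable;

-- For 'dd/mm/yyyy' strings use style 103. The "Conversion failed" error
-- usually means at least one row does not match the chosen style.
```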
Syntax Error - Anyone Able to Help? Posted: 20 Sep 2013 09:23 AM PDT I'm very bad with checking Syntax and I confused myself here D: Anybody able to help? Function: Calling the image: Basically it will save the file uploaded into the database with the filename and the id of the user, it will then make a directory inside site_images named after the ID of the user and it will move the image to that folder. Then to show the image I made it check if the user never uploaded a photo, and if they haven't to show the default. If they have uplaoded a photo it will set the source of the images to "site_images/$id/$image" basically. This works perfectly if I put it like this. Any idea what I did wrong or whats wrong with my syntax? Edit: The image it shows up with is a link to just /15 and 15 is the ID of the user. It doesn't go in site_images but it does go in the users ID. It also doesn't go in the $image variable. Any ideas? |
How can I find out which tables use reference partitioning from the Oracle data dictionary? Posted: 20 Sep 2013 01:44 PM PDT I'm writing a generic drop-all-objects script for our Oracle databases. It generates The problem is that tables which have reference-partitioned tables dependent on them can't be dropped in this way: How can I detect which tables are the parents of reference-partitioned tables from the data dictionary, so I can skip them in the first loop and drop them in a second loop? |
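The data dictionary does expose this: DBA_PART_TABLES records each reference-partitioned table together with the FK constraint it partitions by, which can be followed back to the parent table. A sketch using standard 11g dictionary views:

```sql
-- Sketch: list the parents of reference-partitioned tables.
SELECT pt.owner      AS child_owner,
       pt.table_name AS child_table,
       pc.owner      AS parent_owner,
       pc.table_name AS parent_table
FROM   dba_part_tables pt
JOIN   dba_constraints fk
       ON fk.owner = pt.owner
      AND fk.constraint_name = pt.ref_ptn_constraint_name
JOIN   dba_constraints pc
       ON pc.owner = fk.r_owner
      AND pc.constraint_name = fk.r_constraint_name
WHERE  pt.partitioning_type = 'REFERENCE';
```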
Allow self-referential foreign keys to be null from an alter-table Posted: 20 Sep 2013 08:10 AM PDT I am trying to establish a hierarchical relationship between rows in an existing table. There are two fields of interest: a not-nullable primary key field id, and a second nullable field parentId. Both these fields are pre-existing, and the normal case is that parentId will be null. This seems to be a normal use case, and in fact we have some old tables using the same pattern. I am trying to use FluentMigrator to add a foreign key constraint after the fact on these rows: It is failing with the error:
This is not a problem with illegal values, as I did a bulk update on the test system, and set all Thoughts? |
how to join table from different database with different Users of database? Posted: 20 Sep 2013 12:25 PM PDT I have two databases. Database1: Table 1: Category Database2: Table 2: Product I am calling a query like this; It returns an error: Can anyone help me resolve this issue? |
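Assuming MySQL (the question does not say), a sketch: tables in different databases on the same server can be joined by qualifying them with the database name, provided one connection's user has SELECT on both:

```sql
-- Sketch: qualify each table with its database (schema) name.
SELECT p.ProductName, c.CategoryName
FROM   Database2.Product  AS p
JOIN   Database1.Category AS c ON c.CategoryId = p.CategoryId;

-- If each database currently has its own user, grant one user both:
GRANT SELECT ON Database1.* TO 'appuser'@'localhost';
GRANT SELECT ON Database2.* TO 'appuser'@'localhost';
```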
help in creating a promotion/Offer table (database schema) for shopping cart site [on hold] Posted: 20 Sep 2013 08:33 PM PDT I want to add a promotions/offers table to my shopping cart site, which has other tables such as product and category. product table: -ProductId -Productname -CategoryId -UnitPrice -etc category table: -CategoryId -CategoryName -ParentCategoryId -etc An offer may be of various types, e.g. 'Discount of 20% on any book purchase', 'Discount of 20% on any mobile of samsung brand', 'Buy 2 get 1 free', etc. How can I design the promotion table and then link it with the product or category table? I want to show the list of products with the offer on each product, but don't want to apply too many joins. What would be an effective way of doing this? |
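One possible shape for such a table (all names illustrative, deliberately kept flat so a single LEFT JOIN finds a product's offer either directly or via its category):

```sql
-- Sketch: one promotion row targets either a product OR a whole category.
CREATE TABLE Promotion (
    PromotionId  INT PRIMARY KEY,
    Title        VARCHAR(200) NOT NULL,  -- 'Discount of 20% on any book'
    DiscountPct  DECIMAL(5,2) NULL,      -- percentage-off promotions
    BuyQty       INT NULL,               -- 'Buy 2 get 1 free' promotions
    FreeQty      INT NULL,
    ProductId    INT NULL REFERENCES Product(ProductId),
    CategoryId   INT NULL REFERENCES Category(CategoryId),
    StartDate    DATE NOT NULL,
    EndDate      DATE NOT NULL
);

-- Listing products with their offers then needs only one join:
SELECT p.ProductId, p.Productname, pr.Title
FROM   Product p
LEFT JOIN Promotion pr
       ON pr.ProductId = p.ProductId OR pr.CategoryId = p.CategoryId;
```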
Database Tuning Advisor about Indexing Posted: 20 Sep 2013 09:46 AM PDT How does DTA make an index recommendation? What is the technology behind that? If we can use the same on our side then we can shorten the whole process of tuning recommendation, like: (1) create a trace file using SQL Profiler; (2) use that trace file in DTA; (3) then look for recommendations; (4) follow the recommendations and finally see the results through the query execution plan. So I just wanted to know whether there is any way to create a predictable index. |
FOR loop oracle query with calculation Posted: 20 Sep 2013 11:37 AM PDT I have a list of distinct areas, and would like to track statistics for those districts in a SQL statement. I currently have a setup that runs some summary information, but it doesn't include the whole list. What I'd like to do is select a distinct array of values from one table, then run SQL for each of those values against a few other tables and populate a third table. I've done a fair amount of Googling, and see that I can loop using "LOOP" or feed in the distinct values via a CURSOR. Here's the SQL I'm using now, but if a dispatchgroup is not in the current outages table, it doesn't show statistics for that dispatch group. When I try adding the Ideally what I'd like to happen is: first I select the distinct dgroups from the first table, then loop through the SQL and output the statistics. Here's my stab at the pseudocode. |
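A sketch of that pseudocode as an actual PL/SQL cursor loop; the table and column names are guesses from the question, and a dispatch group with no current outages still gets a zero row:

```sql
-- Sketch: loop over every distinct dispatch group and record its stats.
BEGIN
  FOR g IN (SELECT DISTINCT dispatchgroup FROM districts) LOOP
    INSERT INTO district_stats (dispatchgroup, outage_count)
    SELECT g.dispatchgroup, COUNT(*)
    FROM   current_outages o
    WHERE  o.dispatchgroup = g.dispatchgroup;  -- COUNT(*) is 0 when absent
  END LOOP;
  COMMIT;
END;
/
```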
Location of the mdf file of the database Posted: 20 Sep 2013 12:37 PM PDT I have a database
My |
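Since the database name was stripped from this digest, a generic sketch for locating the physical files:

```sql
-- Sketch: show every data/log file path for a database.
-- 'YourDatabase' is a placeholder for the actual database name.
SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
FROM   sys.master_files
WHERE  database_id = DB_ID('YourDatabase');
```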
mongo replication not happening Posted: 20 Sep 2013 08:22 AM PDT I set up master/slave replication of MongoDB on EC2, but I see no replication happening. When I do "show dbs" on the master, it shows all expected dbs. But when I do the same on the replica, it does not show me any db. Please help me troubleshoot. |
Filter on a window function without writing an outer SELECT statement Posted: 20 Sep 2013 05:22 PM PDT Since window functions cannot be included in the WHERE clause of the inner SELECT, is there another method that could be used to write this query without the outer SELECT statement? I'm using Oracle. Here is the sqlfiddle. |
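Oracle has no QUALIFY clause, so some form of wrapping is unavoidable; a WITH clause at least keeps the window function and its filter adjacent without nesting an outer SELECT around the whole statement. Names below are invented, since the original query was lost from this digest:

```sql
-- Sketch: filter on ROW_NUMBER() via a named subquery.
WITH ranked AS (
  SELECT e.*,
         ROW_NUMBER() OVER (PARTITION BY dept_id
                            ORDER BY salary DESC) AS rn
  FROM   employees e
)
SELECT * FROM ranked WHERE rn = 1;
```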
Errors while creating multiple mysql-5.5 instances Posted: 20 Sep 2013 09:22 AM PDT I have installed a 3rd mysql instance on my testing server. 2 instances were already running without any issues. When I installed the 3rd instance from the mysql-5.5.30 zip source, it installed successfully, but when I tried to restart the 3rd instance it says:
1st instance running on 3305 2nd instance running on 3306 3rd instance running on 3307 Error Log is as follows. Still unable to figure out this error. How can I start the 3rd instance? Installation: Here is the story from the beginning. I have installed mysql via source:
When I restart this instance it gives an error updating the pid and exits. Which step is missing? My
|
How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2? Posted: 20 Sep 2013 08:22 PM PDT I am trying to set up DBMS_SCHEDULER in Oracle 11g to run a remote database job. I have a remote Oracle 11gR2 database on Unix and a local one on Windows. I read that you can install the Oracle Scheduler agent from the 11g client install for machines that don't have Oracle installed, but this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent and provide parameters to register the agent to the remote machine, but I can't find any instructions on the web regarding how to register remote agents when both machines have Oracle installed, or what to use as the agent name in this case. I have added an entry to tnsnames.ora for the remote DB and can tnsping, etc. If I run the |
Loading XML documents to Oracle 11g DB with control file Posted: 20 Sep 2013 06:22 PM PDT I am using Oracle 11g XML database and trying to load XML documents to this DB with a control file and the I want to use the Oracle function Here is the date entry in XML file: And here is the entire control file: I believe that I can execute the above control file with the The UPDATE: I successfully registered the schema, which contains the definition for the date string, and 100 other schemas, with a script. Since this script is very large, I am posting only 2 registration portions of it: The 2nd registration above is the last in the script, and this creates the table STXP, in which I am trying to load about 800 XML files. Each XML file has a root element called stxp. This is the relevant definition of the date string: And this is how I am using the above definition: When I make the above element optional (for testing purposes) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully to the Oracle XML database. When I put this entry back into the XML file (because it is required), Oracle rejects it. Because I let Oracle take care of population of the STXP table with data from XML files, I am not sure if I can set a trigger to pre-process the date string from the input XML file before saving it in the database. I think there is a way to do it in the control file. |
Enabling/disabling/changing Oracle auditing without a shutdown? Posted: 20 Sep 2013 01:22 PM PDT I have a large database that needs auditing on a very detailed level (every select, update, insert, and delete, along with the actual text of the statement) for about half the users. I know how to do this (here is a related question for anyone interested), but I also realize we cannot do this for any extended amount of time because of how quickly we would be collecting massive amounts of data. So while there is a scheduled downtime coming up during which we can implement the auditing, changing it to fine-tune it (as management changes what data they desire) or disabling it once we have enough data would require us to take the database down. While this wouldn't be too horrible to do if we were able to schedule a short downtime late at night, it would be really nice if it could be avoided altogether, but every reference I've seen so far requires the database to be brought down and back up. So, my question (which I believe to be general enough for the purposes of this site, even though the back story is specific) is whether there is a way to enable/disable/change auditing without shutting down the database. Edit: Oracle version 11gR2. As for AUD$ vs. FGA, I'm not sure what FGA is, but AUD$ is the table which will hold the data, so I am assuming that one. |
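One caveat worth noting: it is the AUDIT_TRAIL initialization parameter that demands a restart; individual AUDIT/NOAUDIT statements, and fine-grained auditing (FGA, the DBMS_FGA package, which also captures statement text) can be changed online. A sketch under those assumptions, with invented schema and table names:

```sql
-- Sketch: FGA policies can be added, disabled, and dropped with no restart.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP',            -- assumed schema
    object_name     => 'ORDERS',         -- assumed table
    policy_name     => 'AUDIT_ORDERS',
    statement_types => 'SELECT,INSERT,UPDATE,DELETE');
END;
/
-- Later, without downtime:
-- EXEC DBMS_FGA.DISABLE_POLICY('APP', 'ORDERS', 'AUDIT_ORDERS');
-- EXEC DBMS_FGA.DROP_POLICY('APP', 'ORDERS', 'AUDIT_ORDERS');
```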
MySQL - run SELECT statement on another server without defining the table structure Posted: 20 Sep 2013 04:22 PM PDT In MySQL I can query information on another server using federated tables, as long as I've defined the same table structure locally. In MS SQL Server, however, I can run any SQL statement against a linked server. Is it possible to do the same thing in MySQL? |
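Not directly - MySQL has no linked-server feature that accepts arbitrary SQL against a remote host. What CREATE SERVER does buy you (a sketch, all connection details invented) is that FEDERATED tables no longer repeat the connection string, though each one still needs a column definition mirroring the remote table:

```sql
-- Sketch: define the remote server once...
CREATE SERVER remote_srv
  FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST 'remote.example.com', DATABASE 'sales',
           USER 'repl_user', PASSWORD 'secret');

-- ...then each federated table references it by name; the columns must
-- still match the remote table's structure:
CREATE TABLE orders_remote (
  order_id INT,
  total    DECIMAL(10,2)
) ENGINE=FEDERATED CONNECTION='remote_srv/orders';
```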
consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest? Posted: 20 Sep 2013 07:22 PM PDT MySQL 5.5.29 InnoDB - 128GB RAM - 32 cores - RAID 10 SSD. Our server, which is a dedicated KVM guest on a 'baremetal' host, is hosting our heavy read-write DB server. Everything is file-per-table. innodb_buffer_pool_size is 96GB with 1GB x 2 log_file_size, with about 20 minutes of writes to fill up those logs at peak time. How bad of a situation would it be if O_DIRECT (currently running on the default) was enabled during a high workload without a battery-backed write cache and a total crash were to occur on the OS, parent host, or the power was cut? Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway? |
DB2 to require password each time Posted: 20 Sep 2013 11:22 AM PDT I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Therefore, db2inst1 user does not require username/password authentication (borrows them from the OS). I would like to change that, and force every time a connection is initiated a username/password to be requested. More specifically, this is how the authentication configuration looks like:
I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck. |
oracle streams apply: how to get a reason why LCR message was not applied Posted: 20 Sep 2013 12:22 PM PDT I've set up bidirectional Oracle Streams replication (11gR1) using identical scripts on both machines (DB1 and DB2). Although changes from DB1 are being applied to DB2, changes from DB2 to DB1 aren't. I have only one rule for capture processes that checks for an apply tag to prevent cyclic propagation, and have no rules for apply processes. LCRs from DB2 are dequeued at DB1 by the apply reader process (update LCRs are among the dequeued messages for sure, because when I issue 50 inserts at DB2, the dequeued-messages counter at DB1 increases by 50), but they aren't processed by the apply coordinator and servers: As far as I understand, in that case LCRs can be silently ignored (without throwing an apply error) only if the SCN of the LCR is less than the instantiation SCN for a table, but the instantiation SCN is 1114348 (< 1118751): Oracle provides means to deal with errors, but how do I check why a message was not applied if there was no error? |
SQL Server 2008 Setup Error 0x80070490 Posted: 20 Sep 2013 02:22 PM PDT I am trying to install SQL Server 2008 x64 on Windows 2008 R2 and keep getting the following error:
I have applied all required patches and there are no other instances of SQL Server on the machine. Any clues as to what the cause might be? Thanks. |
Replicating data from Oracle to MySQL Posted: 20 Sep 2013 03:22 PM PDT I work with a vendor that does data analytics, and they currently receive a replication stream from some of our databases using a product called GoldenGate (which is very expensive). GoldenGate has been great - it replicates transactions from the Tandem NSK source and can apply the changes into any supported database - they're using MySQL at the remote end. We're switching our billing system to Oracle, and while we could continue to use GoldenGate to move these logs, I'd like to see if there's another option. We initially chose GoldenGate because nothing else could get data out of the Tandem NSK, but now that we're moving to Oracle, there may be some more native (or at least simpler) choices. I've got nothing against them - like I said, it works great - but I'm hoping that two mainstream databases are easier to replicate between than the NSK. Are there any products or methods that would help get transactional data from an Oracle system into a MySQL database? I'm not sure if there's any way to do this kind of replication natively (I know we can do Oracle -> MSSQL using native replication, but I'm not aware of any way to target MySQL), or if anybody knows of a product that could facilitate this (and costs less than GoldenGate). Thanks for any suggestions! |
MI Data Warehouse Advice Posted: 20 Sep 2013 10:22 AM PDT I have recently started a new job and part of my remit is to try to rescue the Management Information (MI) Data Warehouse. I use the term Data Warehouse very loosely here! The server setup is:
The disks are split into 3 drives:
These are the observations I have made regarding the database:
Importing data The data is imported using batch files and OSQL. It is slow, clunky and prone to failure (it has failed 4 times and I have only been there for two and a half weeks). The logging is also poor. So apart from all that, it is perfect... I need to find a way to fight my way out of this mess but I am not sure how to go about it. Ideally, I want to be able to:
The main issue at the moment is the performance. I have created a new filegroup on drive D: (where the log files are stored) and placed a few non clustered indexes on there. I am being slightly cautious as I don't want to increase the import times as these are taking too long as it is! I wanted to partition the larger tables but partitioning is not included in Standard, it is an Enterprise feature. I realise that this is a pretty huge task and I am not looking for a magic fix but a little guidance on how to attack this would be a great help. EDIT: I should also point out that there is no test or dev environment for this either... |
How can I convert from Double Precision to Bigint with PostgreSQL? Posted: 20 Sep 2013 10:32 AM PDT I need to convert a value of Double Precision to Bigint with PostgreSQL. How can I do that? I have tried with |
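A sketch of the two standard spellings; note that PostgreSQL rounds (rather than truncates) when casting a float to an integer type, so wrap the value in trunc() or floor() first if that matters:

```sql
-- Sketch: both cast forms are equivalent.
SELECT CAST(x AS bigint)  FROM (SELECT 1234567890.7::double precision AS x) s;
SELECT x::bigint          FROM (SELECT 1234567890.7::double precision AS x) s;

-- To truncate instead of round:
SELECT trunc(x)::bigint   FROM (SELECT 1234567890.7::double precision AS x) s;
```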
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.