[how to] mongodb replication node stuck at “STARTUP2” with optimeDate as 1970 |
- mongodb replication node stuck at “STARTUP2” with optimeDate as 1970
- How to convert Postgres from 32 bit to 64 bit
- storing csv files
- Update primary SQL Server in mirrored environment using transaction logs from old environment
- Why do my WAL logs start with 000000010000000000000004 instead of 000000010000000000000001?
- Running functions in parallel
- Make sure slave has finished reading from master
- Create table group sub-folders in Management Studio
- SQL Import/Export defaults to byte stream on export
- MySQL bin log missing data?
- Azure compatible client tools
- MYSQL matching one column to another in a different table
- SQL Server grant permissions to STANDBY database
- Transaction Log Maintenance While Using AlwaysOn Availability Group
- trouble creating a database with postgresql
- Is this a good strategy for importing a large amount of data and decomposing as an ETL?
- Dropping Hypothetical Indexes
- SA permissions issues with many nested objects
- General tool to load dump files
- Optimizing bulk update performance in Postgresql
- High Mysql Load, over 700% CPU
- How do I execute an Oracle SQL script without sqlplus hanging on me?
- Second time query execution using different constants makes faster?
- SSRS appears to be ignoring Permissions set using Report Manager
- postgis problem with shortest distance calculation
- How do I find my current SCN?
- Meaning of 'SET' in error message 'Null value is eliminated by an aggregate or other SET operation'
- Can you give me one example of Business Intelligence?
mongodb replication node stuck at “STARTUP2” with optimeDate as 1970 Posted: 20 Jun 2013 08:36 PM PDT I have just set up a replica set with three nodes. The third node is stuck at stateStr STARTUP2 with "optimeDate" : ISODate("1970-01-01T00:00:00Z"), yet it shows no error message. Is this alright? On the primary, rs.status() and db.printSlaveReplicationInfo() report the same state. Is this alright? Also, how can I test my replication, especially the third node? |
How to convert Postgres from 32 bit to 64 bit Posted: 20 Jun 2013 09:06 PM PDT I would like to convert from 32-bit PG to 64-bit. I am testing this now, but so far I haven't found how to change from a 32-bit install to a 64-bit one. How can this be done? |
storing csv files Posted: 20 Jun 2013 06:51 PM PDT Does anyone have any recommendations on how I am handling the storage of CSV files? Anyone can upload any number of columns with differing column names, and the amount of data doesn't matter. I am validating the data with an application-side CSV parser. Right now, the data does need to be queried for searching. I am storing the data in an EAV table, so there is a column in the database that holds the column name from the CSV and another for the data in that row. If the CSV file has 10 columns and 10 rows, the database would have 100 rows, so it can get large fairly quickly. Lastly, I am able to query the data efficiently because the application builds the query on the fly: it gathers the distinct column names and then uses MAX with IF to return the column names with the data, even when null data is present. |
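The exact schema isn't shown, so this is only a hedged sketch of the kind of pivot query the question describes, using the MySQL-style MAX(IF(...)) pattern it hints at and a hypothetical EAV table csv_data(file_id, row_num, col_name, col_value):

```sql
-- Hypothetical EAV layout: one row per (file, row, column) triple.
-- Pivot the column names back into columns for one uploaded file.
SELECT row_num,
       MAX(IF(col_name = 'name',  col_value, NULL)) AS `name`,
       MAX(IF(col_name = 'email', col_value, NULL)) AS `email`
FROM csv_data
WHERE file_id = 42
GROUP BY row_num
ORDER BY row_num;
```

The application would generate one MAX(IF(...)) expression per distinct col_name for the file, which matches the on-the-fly query building described above.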
Update primary SQL Server in mirrored environment using transaction logs from old environment Posted: 20 Jun 2013 08:00 PM PDT I'm currently migrating a large (~40 GB) database to a new environment. The new environment is an HA/mirrored setup with primary, mirror and witness nodes. I'm backing up the database in the old environment, restoring to primary & mirror, and turning on mirroring. During this time the old environment is still active, and data is changing. When we are ready to go live with the new environment I plan to take another transaction log backup from the old environment and restore that to the new primary server. Is that possible? Will this be successfully synchronised to the mirror? |
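For context, a final log backup can only be applied while the new copies are still in a restoring state; once the database has been recovered and mirroring established, further logs from the old server can no longer be applied. A hedged T-SQL sketch of the cutover, with a hypothetical database name and file path:

```sql
-- On the old server: take the final log backup at cutover time.
BACKUP LOG MyDatabase TO DISK = N'\\share\MyDatabase_tail.trn' WITH INIT;

-- On the new primary and the new mirror (both still restoring, i.e. NORECOVERY):
RESTORE LOG MyDatabase FROM DISK = N'\\share\MyDatabase_tail.trn' WITH NORECOVERY;

-- Recover only the new primary, then configure mirroring from primary to mirror.
RESTORE DATABASE MyDatabase WITH RECOVERY;
```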
Why do my WAL logs start with 000000010000000000000004 instead of 000000010000000000000001? Posted: 20 Jun 2013 02:38 PM PDT I could swear that when I first created my cluster a week ago, my logs started with 000000010000000000000001. Note, I'm using Postgres 9.2.4 on Ubuntu 12.04.2 (via Pitt's PPA). |
Running functions in parallel Posted: 20 Jun 2013 04:42 PM PDT This is for SQL Server 2012. We have some import processes for FTP files that are picked up and read into a staging table; from there we massage/check the data before moving it into production. One of the areas causing issues is dates: some are valid, some are typos, some are just plain gibberish. I have an example staging table and a destination table (say, in production), I insert sample rows into the RawData table, and I wrote a function to clean the dates. We're running into some performance problems with functions, and I'm wondering if/how they work in parallel. I'm assuming the function can run in parallel, but I could be wrong; is it running serially over each column in each row as it inserts? CreateDate() code: |
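The asker's CreateDate() function isn't shown here, but for comparison, here is a minimal sketch of a set-based alternative for messy dates in SQL Server 2012 using TRY_CONVERT, with hypothetical table and column names:

```sql
-- TRY_CONVERT returns NULL instead of raising an error for unparseable dates,
-- so the whole staging table can be checked in one set-based pass.
INSERT INTO dbo.Production (Id, SomeDate)
SELECT r.Id,
       TRY_CONVERT(date, r.RawDate, 101)              -- 101 = mm/dd/yyyy; adjust to the feed's format
FROM dbo.RawData AS r
WHERE TRY_CONVERT(date, r.RawDate, 101) IS NOT NULL;  -- rows that fail stay behind for review
```

Inline expressions like this keep the plan eligible for parallelism, whereas scalar user-defined functions in SQL Server 2012 are invoked row by row and generally force a serial plan.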
Make sure slave has finished reading from master Posted: 20 Jun 2013 04:19 PM PDT I am planning to have a master-slave system in which occasionally the master and slave will reverse roles. To do this, I will stop further updates to the master and redirect traffic to the slave. But, before I make the slave the new master, I need to make sure the slave has finished reading all the updates from the old master. How can I do that? Is running "flush logs" on the master sufficient? A solution that can be readily scripted would be preferred. |
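FLUSH LOGS by itself only rotates the binary log on the master; it says nothing about the slave. A hedged sketch of the usual scriptable pattern, assuming classic file/position replication (the coordinates shown are placeholders):

```sql
-- On the master, after stopping writes:
SHOW MASTER STATUS;   -- note File and Position, e.g. 'mysql-bin.000042' / 1234567

-- On the slave: blocks until the SQL thread has applied events up to that
-- position (or the 300-second timeout expires); returns NULL or -1 on failure.
SELECT MASTER_POS_WAIT('mysql-bin.000042', 1234567, 300);

-- Sanity check that both replication threads are running and caught up:
SHOW SLAVE STATUS\G
```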
Create table group sub-folders in Management Studio Posted: 20 Jun 2013 12:36 PM PDT I am looking to organize tables and stored procedures into project-specific folders. Under the new setup, when referring to my table object, I would have to use the following syntax (I am guessing here): Also, apart from clearing up the clutter, does anybody foresee any performance improvement/degradation because of this re-organization? I use |
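SSMS itself does not offer user-defined sub-folders under Tables, but schemas give roughly the grouping described above. A minimal sketch with hypothetical names:

```sql
-- One schema per project; objects then live under a two-part name.
CREATE SCHEMA ProjectA AUTHORIZATION dbo;
GO
CREATE TABLE ProjectA.Customers (CustomerId int PRIMARY KEY, Name nvarchar(100));
GO
-- Referencing the object uses schema.object. There is no meaningful performance
-- penalty, and schema-qualifying names is generally considered good practice.
SELECT CustomerId, Name FROM ProjectA.Customers;
```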
SQL Import/Export defaults to byte stream on export Posted: 20 Jun 2013 08:21 PM PDT So, I've done some research and I can't seem to figure this out. I'm not sure if it's some setting that I'm missing or what. For the basic info: running SQL 2012, 64-bit, all that good stuff. I noticed that, for some reason, when I export from a table into a flat file using a query, the data type is defaulting to byte stream. In the past, it always defaulted to DT_STR and went through without any issues. Here's the query: Here's the error I'm getting: Here's what the export is showing when I select "Edit Mappings..." Now, this can be easily fixed by simply selecting "DT_STR" in the Mappings dialog, but I frequently export using this method, so I'd like to find out why it's doing this and fix it so I don't always have to go into the Edit Mappings dialog. Is it something to do with the query? EDIT: The data in both tables is stored as varchar(50). |
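One workaround that is sometimes suggested (not confirmed as the cause here) is to cast the columns explicitly in the source query so the wizard infers DT_STR rather than a byte stream. A hedged sketch with hypothetical table and column names:

```sql
-- Explicit casts give the Import/Export Wizard unambiguous string metadata
-- for each output column of the query.
SELECT CAST(t1.Code        AS varchar(50)) AS Code,
       CAST(t1.Description AS varchar(50)) AS Description
FROM dbo.Table1 AS t1
JOIN dbo.Table2 AS t2 ON t2.Code = t1.Code;
```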
MySQL bin log missing data? Posted: 20 Jun 2013 10:34 AM PDT I'm trying to make heads and tails of my binary logs and am coming up short. I have many entries similar to the following from mysqlbinlog, but I'm missing log data that I know should be there. It's puzzling because I get the expected SQL in the mysqlbinlog output for statements executed in phpMyAdmin, but those coming from other PHP-based remote web servers appear not to be recorded. My binary logging settings are: Am I missing a logging option? MySQL 5.0.95 / CentOS 5.9 |
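A likely culprit on 5.0 (which is statement-based only) is a binlog-do-db / binlog-ignore-db filter, which matches on the connection's default database rather than on the tables actually written, so statements from clients that select a different default database are silently skipped. A hedged set of checks (the binlog file name is hypothetical):

```sql
-- Which filters are active, and where logging currently stands:
SHOW VARIABLES LIKE 'log_bin%';
SHOW MASTER STATUS;                       -- includes Binlog_Do_DB / Binlog_Ignore_DB
-- Inspect recent events to see which statements actually made it into the log:
SHOW BINLOG EVENTS IN 'mysql-bin.000123' LIMIT 50;
```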
Azure compatible client tools Posted: 20 Jun 2013 01:53 PM PDT I'm building a DB for someone as a favor, and whilst I'm perfectly OK creating a DB in Azure SQL for them and doing the requisite T-SQL coding they require, I also need to give them a client-based way to access the data that involves no non-SQL coding from me. Ideally this would be a lightweight alternative to SSMS that is compatible with Azure. That way I can give them a series of scripts and parametrised SPs to run. Can someone recommend something that works, please? Thanks for reading. |
MYSQL matching one column to another in a different table Posted: 20 Jun 2013 11:08 AM PDT I currently have two different tables. The first table has a list of titles and IDs associated with those titles; the second table is a list of random headings. What I would like to know is whether there is a way to match up all the titles in table2 to the closest matching title in table1. Is this possible? I've tried:
But that did not work. I know I could run this kind of query as each title is being put into table2 with PHP, but I already have a lot of titles in the database. Any help would be amazing, thanks. |
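The exact table structures aren't shown, so this is only a hedged sketch of one common approach in MySQL: join on a substring match and sort so the longest match comes first for each heading (all names are hypothetical):

```sql
-- For each heading in table2, find table1 titles contained in it (or vice versa),
-- ordered so the longest matching title appears first for each heading.
SELECT t2.id   AS heading_id,
       t2.heading,
       t1.id   AS title_id,
       t1.title
FROM table2 AS t2
JOIN table1 AS t1
  ON  t2.heading LIKE CONCAT('%', t1.title, '%')
   OR t1.title  LIKE CONCAT('%', t2.heading, '%')
ORDER BY t2.id, LENGTH(t1.title) DESC;
```

True fuzzy matching (e.g. edit distance) isn't built into stock MySQL 5.x; it would need a UDF or application-side scoring.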
SQL Server grant permissions to STANDBY database Posted: 20 Jun 2013 12:25 PM PDT So, first: the setup. We have SQL Server 2012 (ServerA) running in domain A. We have SQL Server 2012 (ServerB) running in domain B, separate subnet, no trusts. These are completely separate domains for security reasons and they cannot be joined/trusted/etc. We need to be able to query the data directly from domain B via Windows Authenticated logins. I was able to use this guide to set up transaction log shipping to get the databases from ServerA to ServerB (summary: create the transaction log shipping config, use WinSCP to copy the logs to the remote server, manually create the secondary using SQL script). So now we have the two databases running in STANDBY/read-only on ServerB. Now, the problem: we cannot grant access to these databases because they are in read-only so we cannot modify the permissions. How can I grant read-only access to these databases (either at the server level or DB level) to a domain group from DomainB on ServerB? I've found several references to creating a SQL login on the sending side, but I can't find any way to replicate it with a Windows Auth Login. |
Transaction Log Maintenance While Using AlwaysOn Availability Group Posted: 20 Jun 2013 10:45 AM PDT We are using the HADR (AlwaysOn Availability Group) feature of SQL Server 2012. The server and AG configuration is as below:
The DBTest database is growing day to day (roughly 200 GB per month), and the transaction log file grows along with the data. How do we keep the transaction log file size under control by taking log backups the proper way, and on which replica should we take the log backups? Thanks in advance. |
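Regular log backups are what allow the log to truncate and stop growing; in an availability group they can run on any replica the backup preference allows (full backups on secondaries must be copy-only, log backups need no special option). A hedged sketch with a hypothetical backup path:

```sql
-- On whichever replica you script backups: 1 means this replica is the AG's
-- preferred backup replica for the database.
SELECT sys.fn_hadr_backup_is_preferred_replica(N'DBTest');

-- The log backup itself; schedule it frequently enough to cap log growth.
BACKUP LOG DBTest
TO DISK = N'\\backupshare\DBTest\DBTest_log.trn'   -- hypothetical path
WITH COMPRESSION, INIT;
```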
trouble creating a database with postgresql Posted: 20 Jun 2013 04:36 PM PDT I just installed PostgreSQL on my Windows laptop. I am now trying to create my first database. I launched Windows PowerShell and tried the following: From what I can gather, the password that you give PostgreSQL during installation is for a different user type? After some poking around on the internet, I modified my pg_hba.conf file by appending the following: Now I get the following error message: Not sure what I am doing wrong. Any suggestions? Well, this is interesting. I went back to the pg_hba.conf file and added a newline after the line I added earlier. I got a new error message: When I installed PostgreSQL on my laptop, I set the port to 5432. Looks like PostgreSQL is expecting the server to be running. Going to look into this... |
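For reference, the password set during installation belongs to the postgres superuser; once connected as that user (the server service must be running), the first role and database can be created with plain SQL. A minimal sketch with hypothetical names:

```sql
-- Connected as the postgres superuser (e.g. via psql or pgAdmin):
CREATE ROLE myuser LOGIN PASSWORD 'secret';   -- hypothetical login role
CREATE DATABASE mydb OWNER myuser;            -- hypothetical database
```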
Is this a good strategy for importing a large amount of data and decomposing as an ETL? Posted: 20 Jun 2013 02:15 PM PDT I have a set of five tables (a highly decomposed schema for an ETL, if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a SLEW of other tables, including inserts that just rely on the values I inserted into the first tables. I can do my inserts as an A, B, C process, where I insert into the first table, then insert into some table S where exists in A + T (T being some other table that has preloaded "configuration data"), then insert into Z where exists in B + U, etc. Should I be trying to batch those inserts with a cursor (I know, stone the traitor) or should I just run the raw set-based INSERT ... SELECT statements? Should I stage out the inserts as:
OR should I insert into all the tables where the data is needed, but do it via cursor in a "for loop" style pattern of 100k rows at a time (a sketch of the batched pattern appears after this question)? FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors), but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards). Also bear in mind that I'm normally a C# dev, but I've got the most T-SQL experience here and I'm trying to make the best process I can for raw-loading this data, as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits. Things I think are important to the question:
|
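A hedged T-SQL sketch of the batched pattern referred to in the question above: no cursor, just a repeated set-based INSERT ... SELECT of a capped number of rows until nothing is left (all object names are hypothetical):

```sql
-- Move rows from staging table A into S in 100k-row chunks; each statement
-- commits on its own, keeping transaction log growth and lock duration small.
DECLARE @batch int = 100000, @rows int = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.S (KeyCol, Payload)
    SELECT TOP (@batch) a.KeyCol, a.Payload
    FROM dbo.A AS a
    JOIN dbo.T AS t ON t.KeyCol = a.KeyCol          -- preloaded "configuration data"
    WHERE NOT EXISTS (SELECT 1 FROM dbo.S AS s WHERE s.KeyCol = a.KeyCol);

    SET @rows = @@ROWCOUNT;
END;
```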
Dropping Hypothetical Indexes Posted: 20 Jun 2013 10:33 AM PDT In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement (for clustered indexes) or a DROP STATISTICS statement (for non-clustered indexes). I have a database that is full of DTA remnants that I would like to clean up; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server, so I would expect to have rights to do anything. I've tried this with both DROP STATISTICS and DROP INDEX statements, but both give me the same error. Has anyone deleted these before, and is there a trick I'm missing? Addendum: poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'Delete' options are greyed out. |
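One way to see what DTA left behind and to generate the corresponding drop statements is to query the catalog views; a hedged sketch (statistics-only remnants typically have names starting with _dta_stat):

```sql
-- Hypothetical indexes left by the Database Engine Tuning Advisor.
SELECT 'DROP INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) AS drop_stmt
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE i.is_hypothetical = 1;

-- DTA statistics remnants.
SELECT 'DROP STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
     + '.' + QUOTENAME(s.name) AS drop_stmt
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE s.name LIKE '[_]dta[_]stat%';
```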
SA permissions issues with many nested objects Posted: 20 Jun 2013 04:42 PM PDT I have a broker application that's relatively complicated. Today, after I made some changes, I started getting the error:
The whole scenario up to the point of the error is: (In Database ABC)
The check in the trigger is, I believe, what is causing the issue. If I run the update manually, it works fine. Other relevant facts:
Is there some sort of strange scoping happening because all of this is running in the context of Service Broker? Update: some more info:
|
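This isn't a diagnosis of the asker's exact setup, but the error text commonly appears when code running under an impersonated context (EXECUTE AS, which broker activation typically relies on) reaches into another database: impersonated contexts are not trusted outside their own database unless that database is marked TRUSTWORTHY or the code is signed with a certificate. A hedged sketch of the quick check and the blunt fix (certificate signing is the more careful alternative):

```sql
-- Is the database that owns the impersonated code trusted outside itself?
SELECT name, is_trustworthy_on FROM sys.databases WHERE name = N'ABC';

-- Blunt fix: allow impersonated contexts originating in ABC to be authenticated
-- at the server level. Understand the security implications before using this.
ALTER DATABASE ABC SET TRUSTWORTHY ON;
```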
General tool to load dump files Posted: 20 Jun 2013 03:33 PM PDT I am a big fan of Postgres, both for its price and for its features. I am going to need to load both Oracle and SQL Server dump files into it. I will try to ask and beg for plain .csv files plus schema DDL, but I suspect that I will be given dmp files. Is there a tool, preferably an open source one, that would allow me to read, profile and possibly load Oracle/SQL Server dump files into Postgres? Thank you, Edmon |
Optimizing bulk update performance in Postgresql Posted: 20 Jun 2013 02:55 PM PDT Using PG 9.1 on Ubuntu 12.04. It currently takes up to 24h for us to run a large set of UPDATE statements on a database, which are of the form: (We're just overwriting fields of objects identified by ID.) The values come from an external data source (not already in the DB in a table). The tables have handfuls of indices each and no foreign key constraints. No COMMIT is made till the end. By comparison, it takes 2h to import a dump of the entire DB. Short of producing a custom program that somehow reconstructs a dataset for PostgreSQL to re-import, is there anything we can do to bring the bulk UPDATE performance closer to that of the import? (This is an area that we believe log-structured merge trees handle well, but we're wondering if there's anything we can do within PostgreSQL.) Some ideas:
Basically there's a bunch of things to try, and we're not sure which are the most effective or whether we're overlooking other things. We'll be spending the next few days experimenting, but we thought we'd ask here as well. I do have concurrent load on the table, but it's read-only. Thanks. |
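One technique that usually brings bulk updates much closer to import speed is to load the new values into a staging table with COPY and then apply a single set-based UPDATE ... FROM, instead of millions of individual statements. A hedged sketch with hypothetical names and path:

```sql
-- Stage the incoming values; UNLOGGED skips WAL for the staging table (PG 9.1+).
CREATE UNLOGGED TABLE staging_updates (id bigint PRIMARY KEY, field1 text, field2 int);
COPY staging_updates FROM '/tmp/updates.csv' WITH (FORMAT csv);   -- hypothetical file

-- One set-based pass over the target table.
UPDATE target_table AS t
SET    field1 = s.field1,
       field2 = s.field2
FROM   staging_updates AS s
WHERE  t.id = s.id;
```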
High Mysql Load, over 700% CPU Posted: 20 Jun 2013 02:33 PM PDT I have high MySQL load on a 64-bit Linux server with 24 GB RAM and an Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz. A lot of queries are in the "Sending data" state. Here is the MySQL status, and here is /etc/my.cnf. I tried to optimize the tables and adjust my.cnf with mysqlreport, but it's still the same. I don't use InnoDB. MySQL version: mysql> SHOW CREATE TABLE friends\G mysql> SHOW CREATE TABLE likes\G mysql> SHOW CREATE TABLE facesessions\G mysql> SELECT SUM(index_length) FROM information_schema.tables WHERE engine='MyISAM'; |
How do I execute an Oracle SQL script without sqlplus hanging on me? Posted: 20 Jun 2013 01:33 PM PDT For an automated task I would very much like to run some SQL scripts and make sure that
How can I do this with Oracle (and sqlplus)? |
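A hedged sketch of the usual pattern: have the script itself tell sqlplus to exit on completion and on errors, so an automated run never sits at the prompt (file name, connection string and statements are hypothetical):

```sql
-- myscript.sql: run non-interactively, e.g.  sqlplus -S user/pass@db @myscript.sql
-- Exit with a non-zero code if any SQL or OS error occurs, instead of hanging.
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT FAILURE
SET ECHO OFF FEEDBACK OFF

UPDATE some_table SET some_col = 1 WHERE id = 42;   -- hypothetical work
COMMIT;

-- Without an explicit EXIT, sqlplus stays at the prompt after the script finishes.
EXIT
```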
Second time query execution using different constants makes faster? Posted: 20 Jun 2013 11:33 AM PDT Can someone explain, or point me to an explanation of, how execution against indexes behaves with different constants at intervals in MySQL? I notice that only the first execution against the table takes a long time; after that, even with different constants, the query executes very quickly. I would like to know how to execute the query so that it takes the same amount of time every time it runs with different constants. Is there a parameter I can switch off/on? First execution time: 9 mins. Subsequent execution time: 18 secs. |
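The usual explanation is caching: the first run reads index and data pages from disk into memory (buffer pool or key cache), and the result may also land in the query cache, so later runs with different constants hit warm memory. A hedged way to separate the effects (query and names hypothetical):

```sql
-- SQL_NO_CACHE rules out the query cache; if repeated runs are still fast,
-- the speedup comes from warm data/index pages rather than cached results.
SELECT SQL_NO_CACHE col1 FROM big_table WHERE indexed_col = 12345;

-- Watch physical reads drop between cold and warm runs:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';   -- InnoDB: reads that missed the buffer pool
SHOW GLOBAL STATUS LIKE 'Key_reads';                  -- MyISAM: index reads that hit disk
```

There is no supported switch that makes every execution pay the cold-cache cost short of restarting the server or flushing the caches, which is rarely what you want in production.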
SSRS appears to be ignoring Permissions set using Report Manager Posted: 20 Jun 2013 05:33 PM PDT I have set up SSRS on SQL Server 2008 in native mode. As an administrator I can log in to Report Manager, upload reports and run them, and also use the Web Service URL to generate reports. I have also created a local user on the machine; I went into Report Manager as Admin and, at the top level, set permissions that should assign the local user to all roles. When I log in to the machine as that user and then navigate to Report Manager, I just get the heading for the page but do not see any of the folders that are configured. I've checked, and the folders are set to inherit parent permissions and they show the newly created local user in there too. It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager? When logged in as the newly created local user: |
postgis problem with shortest distance calculation Posted: 20 Jun 2013 12:33 PM PDT While working with PostGIS pgRouting, I found the shortest_path function for calculating the distance between two roads (lines). Its logic is based on a start point (start_id) and an end point (end_id), but in my data a linestring contains many internal points, e.g. 'LINESTRING(1 1,2 2,3 3,4 4,5 5)'. It takes the start point (1 1) and end point (5 5), so if another line starts at (5 5), such as 'LINESTRING(5 5,6 6)', it is reported as a route. But a line that crosses a point inside the linestring, like (2 2, 3 3, 4 4), is not reported as connected. Example table roads: id=1, name=A, way=LINESTRING(1 1,2 2,3 3,4 4,5 5); id=2, name=B, way=LINESTRING(5 5,6 6); id=3, name=C, way=LINESTRING(2 1,2 2,2 3). If I apply the shortest_path function from point (1 1) to (6 6) it shows the way, but for (1 1) to (2 3) it shows nothing, even though there is a route (1 1, 2 2, 2 3). Can anyone please help me find a solution? Regards, Deepak M |
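Not a pgRouting-specific recipe, but the usual cause is that the network isn't noded: edges only connect at their end points, so a junction in the middle of a linestring is invisible to the router. A hedged PostGIS sketch that splits every line at shared points, after which the routing topology (source/target ids) has to be rebuilt on the new table (table and column names from the question, output table hypothetical):

```sql
-- Split all roads at every shared vertex/intersection, one segment per row.
CREATE TABLE roads_noded AS
SELECT row_number() OVER () AS id, g.geom AS way
FROM (
    SELECT (ST_Dump(ST_Node(ST_Collect(way)))).geom AS geom
    FROM roads
) AS g;
```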
How do I find my current SCN? Posted: 20 Jun 2013 01:07 PM PDT Given any version of Oracle:
|
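For the title question, two standard answers (assuming sufficient privileges; the CURRENT_SCN column of v$database is available from 10g onward):

```sql
-- Current SCN from the database view:
SELECT current_scn FROM v$database;

-- Or via the flashback package:
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;
```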
Meaning of 'SET' in error message 'Null value is eliminated by an aggregate or other SET operation' Posted: 20 Jun 2013 10:55 AM PDT I saw the above 'ANSI warning' message today when running a colleague's script (and I don't know which of the many statements caused the warning to be shown). In the past I've ignored it: I avoid nulls myself, and so anything that would eliminate them is a good thing in my book! However, today the word 'SET' literally shouted out at me and I realised I don't know what the meaning of the word is supposed to be in this context. My first thought, based on the fact that it is upper case, is that it refers to the SET statement. According to the SQL Server Help, the 'ANSI warnings' feature is based on ISO/ANSI SQL-92, the spec for which makes just one use of the term 'Set operation' in a subsection title (hence in title case) in the data assignment section. However, after a quick Googling of the error message I see examples where the casing varies. My second thought, based on the wording of the SQL Server warning, was that the mathematical meaning of set is implied. However, I don't think that aggregation in SQL is, strictly speaking, a set operation. Even if the SQL Server team consider it to be a set operation, what is the purpose of putting the word 'set' in capitals? While Googling I also noticed another SQL Server error message that uses the same words 'SET operation' in the same case, and there it can only refer to the assignment of a value. Can anyone shed any light on the matter? |
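For reference, the warning itself is easy to reproduce; a minimal sketch:

```sql
-- Aggregating a column that contains NULLs raises the ANSI warning in question.
CREATE TABLE #t (val int NULL);
INSERT INTO #t (val) VALUES (1), (NULL), (3);

SELECT SUM(val) AS total FROM #t;
-- Returns 4, along with:
-- Warning: Null value is eliminated by an aggregate or other SET operation.
```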
Can you give me one example of Business Intelligence? Posted: 20 Jun 2013 06:11 PM PDT I don't really understand what Business Intelligence is all about. If I start from having a corporate DB, what is it that a BI person would do? I found plenty of material on the web, but it usually is a bit too complex. I want a simple example that would make me understand what BI is all about and what would a BI person produce that is of value to my organization. |