Sunday, August 4, 2013

[how to] sync geographical branches' localhosts with a VPS server

sync geographical branches' localhosts with a VPS server

Posted: 04 Aug 2013 09:28 PM PDT

We have 43 geographical branches within the country. We are building a web app to handle our university students' registration, marks, payments, etc. The 43 branches don't need to connect to each other, but each must connect to the central server. If a branch's internet connection is interrupted, we plan to fall back to a localhost installation of the web app, and when the network is back that localhost data should sync with the central server. Is this possible with MySQL clustering?
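
As a hedged note: plain MySQL replication, rather than NDB clustering, is the mechanism usually reached for in this situation, and each replication channel only flows one way, so collecting 43 branches' local writes into one central server needs multi-source replication or an application-level sync on top. A minimal sketch of the replication handshake itself (host, user, password, and log coordinates are hypothetical placeholders):

-- On a replica, point at the other server and resume replication once
-- the network is back (all values below are placeholders):
CHANGE MASTER TO
    MASTER_HOST = 'central.example.edu',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000042',
    MASTER_LOG_POS = 4;
START SLAVE;
-- then SHOW SLAVE STATUS\G and check Slave_IO_Running / Slave_SQL_Running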

How Oracle syncs data across nodes in a cluster

Posted: 04 Aug 2013 05:06 PM PDT

If I have an Oracle DB server clustered across two nodes, both nodes have a copy of all the data; the data is not split half and half across the nodes.

Now, when my application (deployed on WAS) makes an update through the cluster, the update lands on one of the nodes and Oracle then syncs it to the other node. My question is: does this sync happen in real time? Is it configurable? It seems that if it is in real time, performance will be affected.

What is the standard solution for that?

My MySQL server time is not the same as my server's

Posted: 04 Aug 2013 03:51 PM PDT

A month ago I purchased a server from a hosting company.

The details are:

Linux CentOS (latest version).

I installed ZPanel, which installs phpMyAdmin, MySQL, and all the other good stuff. Anyway, I have managed to change my SERVER time to my local time, which is GMT. When I check the time on my MySQL server it also shows the right time, but when I use the

now()  

function from PHP, the stored time is one hour ahead of my current time, which pretty much messes up my system.

Does anyone know how to fix this? It's been a few weeks and nothing has fixed it.

Thank you.
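
For reference, this symptom often comes from MySQL's own time_zone setting disagreeing with the OS clock rather than from PHP. A minimal sketch for checking and adjusting it (assuming the OS clock itself is correct; a named zone requires MySQL's time zone tables to be loaded):

-- Compare MySQL's idea of the time zone with what NOW() returns:
SELECT @@global.time_zone, @@session.time_zone, NOW();

-- If MySQL is an hour off, set its global time zone explicitly:
SET GLOBAL time_zone = '+00:00';  -- or 'Europe/London' with tz tables loaded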

How can I connect to a database after its disk has come back online?

Posted: 04 Aug 2013 11:26 AM PDT

I have problems with a database after reconnecting a hard disk:

  1. A .NET application works with the database.

  2. The hard disk holding the physical files is disconnected.

  3. Any request to the database now fails (as expected) with the exception "Unable to open the physical file".

  4. The hard disk is reconnected.

  5. Every subsequent query to the database still throws the exception "Unable to open the physical file".

How can I reconnect the database to its physical files without restarting the SQL Server service?
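
One commonly suggested approach is to cycle the database rather than the whole service, so SQL Server reopens the physical files. A sketch ('MyDb' is a placeholder database name):

ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDb SET ONLINE;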

How do I view a Microsoft SQL Server Compact Database

Posted: 04 Aug 2013 04:44 PM PDT

I have installed SQL Server 2012 Enterprise Edition.

I can use SQL Server Data Tools to add databases and tables and query tables etc.

Next, I used Visual Studio to try out a tutorial on creating a website with a database. The database type created was an SQL Server Compact Local Database.

This database doesn't turn up in the list of databases in SQL Server Data Tools. How do I view compact local databases? (I want to be able to view the data in the DB, add rows using a GUI, and run SQL queries within the tool.)

Or are they different products, in which case does Microsoft have an equivalent tool to SQL Server Data Tools for SQL Server Compact databases?

Delete operation on a MySQL cluster

Posted: 04 Aug 2013 02:17 AM PDT

I am trying to delete some entries from a table on a cluster, and I get this error:

ERROR 1297 (HY000): Got temporary error 233 'Out of operation records in transaction coordinator (increase MaxNoOfConcurrentOperations)' from NDBCLUSTER

I have increased MaxNoOfConcurrentOperations to 1000000,

but I still get the error.

I never get this error when selecting or inserting.
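
A common workaround, sketched under the assumption that the delete has no ordering requirements, is to split it into batches so that no single transaction exceeds the operation-record limit:

-- Repeat until 0 rows are affected (table name and condition are placeholders):
DELETE FROM mytable WHERE some_condition LIMIT 10000;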

Scaling of PostGIS vs MySQL cos/sin calculations

Posted: 04 Aug 2013 01:10 AM PDT

I need to sort database rows according to GPS coordinates (or any other geographical representation).

As I understand it, MySQL's spatial indexes are too buggy at the time of writing (at least, articles and my own tests indicate so).

So I can either switch to PostgreSQL to use PostGIS, which seems to work excellently, or do the math directly in MySQL:

((acos(sin(latitude * pi() / 180) * sin($latitude * pi() / 180)
      + cos(latitude * pi() / 180) * cos($latitude * pi() / 180)
        * cos((longitude - $longitude) * pi() / 180))
  * 180 / pi()) * 60 * 1.1515 * 1.609344) km

I'm not asking for a PostgreSQL vs MySQL debate, but simply how the performance of the MySQL expression scales as rows grow compared to using PostGIS/PostgreSQL.

It's a drastic step to switch database systems, so the upside had better be significant.
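
For scaling intuition: the MySQL expression above must be evaluated for every candidate row, so its cost grows linearly with table size, whereas PostGIS can answer nearest-first ordering from a spatial index. A sketch, assuming a hypothetical places(id, geom) table (the <-> operator needs PostgreSQL 9.1+/PostGIS 2.0+):

-- Build a spatial index once:
CREATE INDEX idx_places_geom ON places USING GIST (geom);

-- Nearest-first ordering can then walk the index instead of scanning:
SELECT id
FROM places
ORDER BY geom <-> ST_SetSRID(ST_MakePoint(13.40, 52.52), 4326)
LIMIT 20;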

Calculating the median value in a MySQL table with CodeIgniter

Posted: 04 Aug 2013 05:15 PM PDT

This is my model file in CodeIgniter. I need to calculate the median of this column: budget_breakdown.breakdown_grants. How can I do this calculation?

function reports_details1() {
    $this->db->select('budget_breakdown.breakdown_grants');
    // survey_respondent_info.state, survey_respondent_info.survey_id, budgets.budget_id,
    $this->db->from('survey_respondent_info');
    $this->db->join('budgets',
                    'budgets.survey_id = survey_respondent_info.survey_id', 'left');
    $this->db->join('budget_breakdown',
                    'budgets.budget_id = budget_breakdown.budget_id', 'left');
    $this->db->where('budgets.budget_option_id', 2);
    $query1 = $this->db->get();
    $result = $query1->result();
    return $result;
}
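
MySQL has no built-in MEDIAN() aggregate, so one common sketch ranks the rows with a user variable and averages the middle one or two values. This runs over the bare budget_breakdown table; the joins and WHERE condition from the model above would still need to be folded in, e.g. via $this->db->query():

SET @rownum := 0;
-- Average the middle row (odd count) or the two middle rows (even count):
SELECT AVG(t.breakdown_grants) AS median
FROM (SELECT @rownum := @rownum + 1 AS rn, bb.breakdown_grants
      FROM budget_breakdown bb
      ORDER BY bb.breakdown_grants) AS t,
     (SELECT COUNT(*) AS cnt FROM budget_breakdown) AS c
WHERE t.rn IN (FLOOR((c.cnt + 1) / 2), CEILING((c.cnt + 1) / 2));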

Government border control database solution [closed]

Posted: 04 Aug 2013 01:39 AM PDT

I am researching IT solutions for border control. Can you recommend any big database administration and security companies?

I'd really appreciate it.

Creating a table with parent_id from a flat table

Posted: 04 Aug 2013 07:47 AM PDT

I have a flat table

(id, city, county, state),  

Example

(1,'Beckingen', 'Merzig-Wadern', 'Saarland')  

A city belongs to a county. A county belongs to a state.

My first attempt was to split the data into three tables (cities, counties, and states) and build up the associations via linking tables (cities_counties). Now I want instead to create a single 'locations' table out of it, where the county is the parent of the city and the state is the parent of the county.

locations:   (id, name, parent_id, type)     (type is 'city','county', or 'state')  

What would be the best practice for creating the adjacency list from the flat table?

I'm working in a LAMP environment, in case a PHP script would fit.
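
A sketch in plain SQL (MySQL syntax; 'flat' stands in for the source table, locations.id is assumed to be AUTO_INCREMENT, and county names are assumed unique across states, which may not hold for real data):

-- States have no parent:
INSERT INTO locations (name, parent_id, type)
SELECT DISTINCT state, NULL, 'state' FROM flat;

-- Counties point at their state row:
INSERT INTO locations (name, parent_id, type)
SELECT DISTINCT f.county, s.id, 'county'
FROM flat f
JOIN locations s ON s.name = f.state AND s.type = 'state';

-- Cities point at their county row:
INSERT INTO locations (name, parent_id, type)
SELECT DISTINCT f.city, c.id, 'city'
FROM flat f
JOIN locations c ON c.name = f.county AND c.type = 'county';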

Optimizing a query with ORDER BY in a derived table

Posted: 04 Aug 2013 06:34 AM PDT

The query below takes too long to execute (58 seconds). If I run two more queries like this with different nid values, the second and third queries take much longer still. How can I optimize it?

SELECT consoleId,
       servertime,
       servertime AS servertimeUNIX,
       nocName,
       eventIdx,
       nocStatus  AS status,
       nid,
       site,
       machine,
       clientversion,
       timeExecuted,
       COUNT(*)   AS eventCount
FROM   (SELECT consoleId, servertime, nocName, eventIdx, nocStatus,
               nid, site, machine, clientversion, timeExecuted
        FROM   temptest
        ORDER  BY tid DESC) AS x
WHERE  servertime >= 1367902800
       AND servertime <= 1370581199
       AND nid = 1124
GROUP  BY FROM_UNIXTIME(servertime, '%Y-%m-%d'),
          site,
          machine,
          nid;

The table definition is

CREATE TABLE `temptest` (
  `tid`                INT(10) NOT NULL AUTO_INCREMENT,
  `eventIdx`           INT(11) NOT NULL DEFAULT '0',
  `servertime`         INT(11) DEFAULT NULL,
  `nocName`            VARCHAR(50) DEFAULT NULL,
  `site`               VARCHAR(50) DEFAULT NULL,
  `machine`            VARCHAR(64) DEFAULT NULL,
  `clientversion`      VARCHAR(20) DEFAULT NULL,
  `nid`                INT(10) DEFAULT NULL,
  `nocStatus`          VARCHAR(250) DEFAULT NULL,
  `consoleId`          INT(11) DEFAULT NULL,
  `timeExecuted`       VARCHAR(100) DEFAULT NULL,
  `machineManufacture` VARCHAR(100) DEFAULT NULL,
  PRIMARY KEY (`tid`),
  KEY `consoleindex` (`consoleId`),
  KEY `nitification` (`servertime`, `nid`)
) ENGINE=InnoDB AUTO_INCREMENT=8229683 DEFAULT CHARSET=latin1
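
One direction worth trying, as a sketch rather than a verified fix for this workload: the ORDER BY tid inside the derived table forces the whole table to be materialized before the outer WHERE applies (and that ordering is discarded by the GROUP BY anyway), and the existing nitification key puts the range column servertime before nid. Querying the base table directly with an equality-then-range index may help:

-- Equality column first, range column second:
ALTER TABLE temptest ADD KEY nid_servertime (nid, servertime);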

Is there a way to find the least recently used tables in a schema?

Posted: 04 Aug 2013 07:13 AM PDT

Is there a way to find the least recently used tables in a MySQL schema, other than going into the data directories? I was hoping there was a metadata or status trick, but Update_time in SHOW TABLE STATUS and INFORMATION_SCHEMA is always NULL.
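
For reference, this is the metadata the trick would rely on (a sketch; for InnoDB tables in this era UPDATE_TIME is typically NULL, which is exactly the problem described):

SELECT table_name, update_time
FROM information_schema.TABLES
WHERE table_schema = 'mydb'  -- placeholder schema name
ORDER BY update_time;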

Cannot rebuild index, but there's no reason why not?

Posted: 04 Aug 2013 02:10 AM PDT

I've created a process that rebuilds only the indexes that need rebuilding (the process takes an hour and a half if I rebuild them all). While it works beautifully, it gets stuck on one particular index, and I see no reason why it should.

It fails with the following message:

Msg 2725, Level 16, State 2, Line 1

An online operation cannot be performed for index 'I_520CUSTVENDRELIDX' because the index contains column 'MEMO' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

However, when I run the query below, based on a suggestion by this chap, I get no results:

SELECT *
FROM sys.index_columns AS ic
INNER JOIN sys.columns AS c
        ON ic.object_id = c.object_id
       AND ic.column_id = c.column_id
       AND ((c.system_type_id IN (34, 35, 99, 241))  -- image, text, ntext, xml
            OR (c.system_type_id IN (167, 231, 165)  -- varchar, nvarchar, varbinary
                AND max_length = -1))
INNER JOIN sys.indexes AS si
        ON si.object_id = ic.object_id
       AND ic.index_id = si.index_id
INNER JOIN sys.tables AS t
        ON t.object_id = ic.object_id
WHERE t.name = 'CONTACTPERSON'
  AND si.name = 'I_520CUSTVENDRELIDX'

On top of that, a manual inspection of the index in question shows no text, ntext, image, xml, varchar(MAX), nvarchar(MAX), or varbinary(MAX) columns. Is there something I'm missing here?

For the record, this is a clustered index.
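
One thing worth noting, per the error text itself: for a clustered index, a LOB column anywhere in the table blocks an online rebuild, whereas the query above only inspects columns that belong to the index. A sketch that checks every column of the table instead:

SELECT c.name, TYPE_NAME(c.system_type_id) AS type_name
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('CONTACTPERSON')
  AND (c.system_type_id IN (34, 35, 99, 241)        -- image, text, ntext, xml
       OR (c.system_type_id IN (165, 167, 231)      -- varbinary, varchar, nvarchar
           AND c.max_length = -1));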

Programmatically find indexes that cannot be rebuilt online

Posted: 04 Aug 2013 10:32 AM PDT

I am automating index rebuilds and reorganisations using T-SQL, and I run into problems with indexes that cannot be rebuilt online, primarily because they include ntext/nvarchar columns.

Is there a way to identify them programmatically as part of this statement, i.e. as an addition to the WHERE predicate?

SELECT --TOP 20
       OBJECT_NAME(IPS.OBJECT_ID)   AS [TableName],
       avg_fragmentation_in_percent,
       SI.name                      AS [IndexName],
       SCHEMA_NAME(ST.schema_id)    AS [SchemaName],
       0                            AS IsProcessed
INTO   #FramentedTableList
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) IPS
JOIN   sys.tables  ST WITH (NOLOCK) ON IPS.OBJECT_ID = ST.OBJECT_ID
JOIN   sys.indexes SI WITH (NOLOCK) ON IPS.OBJECT_ID = SI.OBJECT_ID
                                   AND IPS.index_id  = SI.index_id
WHERE  ST.is_ms_shipped = 0
  AND  SI.name IS NOT NULL
  AND  avg_fragmentation_in_percent >=
         CONVERT(DECIMAL, @FragmentationThresholdForReorganizeTableLowerLimit)
ORDER  BY avg_fragmentation_in_percent DESC
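
One way to extend the WHERE predicate, sketched from the rules in the error message of the previous question (for a clustered index, any LOB column of the table blocks an online rebuild; for a non-clustered index, only LOB columns that are part of the index do):

-- Append to the WHERE clause above:
AND NOT EXISTS (
    SELECT 1
    FROM sys.columns c
    LEFT JOIN sys.index_columns ic
           ON ic.object_id = c.object_id
          AND ic.column_id = c.column_id
          AND ic.index_id  = SI.index_id
    WHERE c.object_id = SI.object_id
      AND (c.system_type_id IN (34, 35, 99, 241)      -- image, text, ntext, xml
           OR (c.system_type_id IN (165, 167, 231)    -- varbinary, varchar, nvarchar
               AND c.max_length = -1))
      -- clustered (index_id = 1): any table column counts;
      -- otherwise: only columns belonging to this index count
      AND (SI.index_id = 1 OR ic.index_id IS NOT NULL))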

MySQL information_schema doesn't update

Posted: 04 Aug 2013 08:55 PM PDT

I have a database, say abc, on a MySQL server. It has only one table, named test. test uses the InnoDB engine, and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the size of database abc. Here is the query I use:

SELECT table_schema AS "name",
       SUM(IF(engine = "MyISAM",
              data_length + index_length - data_free,
              data_length + index_length)) AS "size"
FROM   information_schema.TABLES
WHERE  table_schema LIKE "abc";

The strange thing is that the reported database size doesn't decrease at all, even though the data in test is gone.

I've done this kind of test many times; this strange behavior happens sometimes.

I'm using Percona MySQL Server 5.5.29-rel29.4.

Can anybody tell me what is wrong?

Update:

Actually, I use another thread to check the database size periodically.
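
As a hedged note rather than a confirmed diagnosis: DELETE frees pages inside the InnoDB tablespace without shrinking the .ibd file, and the sizes reported by information_schema are estimates that can lag. With innodb_file_per_table on, rebuilding the table reclaims the file space:

-- For InnoDB, OPTIMIZE is mapped to a table rebuild (recreate + analyze),
-- which shrinks the per-table .ibd file after large deletes:
OPTIMIZE TABLE abc.test;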

Install MariaDB without password prompt in ubuntu

Posted: 04 Aug 2013 12:53 AM PDT

I need to install MariaDB server on Ubuntu without a password prompt. For this purpose, I executed the following commands, but they made no difference; the password prompt still appears.

I used the following commands:

  export DEBIAN_FRONTEND=noninteractive
  echo mariadb-server-5.5 mariadb-server/root_password password mypass | debconf-set-selections
  echo mariadb-server-5.5 mariadb-server/root_password_again password mypass | debconf-set-selections
  sudo apt-get -y install mariadb-server

I also checked some links on Stack Overflow, but they did not work:

http://stackoverflow.com/questions/8138636/install-mysql-on-ubuntu-natty-without-password-prompt-and-using-shell-variable-a

http://stackoverflow.com/questions/7739645/install-mysql-on-ubuntu-without-password-prompt

Please help me with this. Is there anything wrong with my commands?

Configuration options for relational algebra evaluator "ra.jar"

Posted: 04 Aug 2013 03:53 AM PDT

I am participating in the Stanford Database Class, and there are some problem sets to be solved in "mathy" relational algebra syntax.

The course instructors require students to use this open-source, free jar file, ra.jar, to submit solutions via web forms.

RA: an interactive relational algebra interpreter  Version 2.1b by Jun Yang (junyang@cs.duke.edu)  >  

I wonder if there is a better evaluator available. This one is pretty good, but:

  • at the command-line interpreter prompt, the cursor position is hard to see

  • the tool translates relational algebra expressions to SQL and submits them to SQLite, but it does not show (return to the user) the intermediate steps. There should be a -log or verbose option to return the generated SQL. (The ra tool has a -v option to return the submitted RA query.)

I want the CREATE VIEW.... and SELECT ... SQL statements that the jar file generates and sends to SQLite. Alternatively, can I configure SQLite to echo/log all SQL commands submitted to it? Is there an option for the .properties file?

Limit memory used for mongoDb

Posted: 04 Aug 2013 09:55 AM PDT

Is there any way to limit the RAM MongoDB uses on Debian? I've been looking for a solution for about 8 hours, but with no results.

What's better/faster? MySQL or the filesystem?

Posted: 04 Aug 2013 06:42 AM PDT

Let's imagine a web site that is a directory of people. For each person there may be a profile photo and a biography.

I'll admit my SQL queries could be better, but in general, which would be faster and use less processor:

checking whether a file exists and then opening it, or

checking against MySQL to see whether a bio exists and then displaying it?

I'm pretty sure that in the above case the filesystem will smoke the MySQL database.

What if I make the database a read-only delimited text file?

Which is faster in that case?

Is there a point at which, once the text file has too many records, it's better to use MySQL?

Pgpool, PostgreSQL and Apache tuning (1000 concurrent spatial queries)

Posted: 04 Aug 2013 08:55 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it through pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers, each with 24GB RAM and a 350GB SSD. Our DB is about 150GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to work. When I try to submit more than 150 concurrent queries, I get the following error from the Python script, even though the pgpool log file doesn't contain any errors:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?
