Thursday, September 26, 2013

[how to] How to create login and logout trigger for client audit?


How to create login and logout trigger for client audit?

Posted: 26 Sep 2013 09:30 PM PDT

I need a table to record login and logout information for clients (not the sysadmin); I can then use the dates to compute statistics on user login/logout activity.

Can anyone offer a simple method for creating login and logout triggers that audit a specific client rather than the sysadmin?

Thanks in advance.
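
A minimal sketch of the usual direction, assuming SQL Server (which the mention of "sysadmin" suggests): a server-level LOGON trigger can record logins, but there is no logout trigger, so logouts would need something like SQL Server Audit or Extended Events instead. Table and trigger names below are placeholders.

-- Hypothetical audit table
CREATE TABLE master.dbo.LoginAudit (
    LoginName sysname,
    LoginTime datetime DEFAULT GETDATE()
);
GO
-- Server-level logon trigger; skips members of the sysadmin role.
-- Caution: if the INSERT fails, logins are blocked, so test on a dev
-- server first and make sure non-admin logins can write to the table.
CREATE TRIGGER trg_LoginAudit
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF IS_SRVROLEMEMBER('sysadmin') = 0
        INSERT INTO master.dbo.LoginAudit (LoginName)
        VALUES (ORIGINAL_LOGIN());
END;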

Best way to display number of overspent projects

Posted: 26 Sep 2013 07:46 PM PDT

I have the first view defined as:

CREATE VIEW Projects AS
    SELECT DepartmentName,
           COUNT(*) AS NumberOfProjects,
           OfficeNumber,
           Phone
    FROM   DEPARTMENT AS D JOIN PROJECT AS P
             ON D.DepartmentName = P.Department
    GROUP BY Department;

Which gives:

+----------------+------------------+--------------+--------------+
| DepartmentName | NumberOfProjects | OfficeNumber | Phone        |
+----------------+------------------+--------------+--------------+
| Accounting     |                1 | BLDG01-100   | 360-285-8300 |
| Finance        |                2 | BLDG01-140   | 360-285-8400 |
| Marketing      |                2 | BLDG02-200   | 360-287-8700 |
+----------------+------------------+--------------+--------------+

However, I want to display a "NumberOfOverBudgetProjects" for each department in this view. I created a view which lists which projects are going over their hourly budget:

CREATE VIEW Budgets AS
    SELECT Department,
           MaxHours AS ProjectMaxHours,
           SUM(HoursWorked) AS TotalHoursWorked,
           SUM(HoursWorked) - MaxHours AS Balance
    FROM   PROJECT AS P JOIN ASSIGNMENT AS A
             ON P.ProjectID = A.ProjectID
    GROUP BY P.ProjectID;

Which gives:

+------------+-----------------+------------------+---------+
| Department | ProjectMaxHours | TotalHoursWorked | Balance |
+------------+-----------------+------------------+---------+
| Marketing  |          135.00 |           160.00 |   25.00 |
| Finance    |          120.00 |            85.00 |  -35.00 |
| Accounting |          145.00 |           130.00 |  -15.00 |
| Marketing  |          150.00 |           165.00 |   15.00 |
| Finance    |          140.00 |            52.50 |  -87.50 |
+------------+-----------------+------------------+---------+

The Marketing department should list 2 over-budget projects, while Accounting and Finance should both list 0.

Any ideas on how I can incorporate an "OverBudgetProjects" column into the original view, or do it by creating a new view?
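
A hedged sketch, building only on the Budgets view above: conditional aggregation can count each department's positive balances (the DepartmentBudgets name is made up).

CREATE VIEW DepartmentBudgets AS
    SELECT Department,
           SUM(CASE WHEN Balance > 0 THEN 1 ELSE 0 END) AS NumberOfOverBudgetProjects
    FROM   Budgets
    GROUP BY Department;

Joining this back to the first view on the department name would put the count alongside NumberOfProjects.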

How to drop a column named 'column'?

Posted: 26 Sep 2013 05:41 PM PDT

I need to delete a column named "column" from a table, but the standard:

mysql> ALTER TABLE table_name DROP COLUMN 'column';  

returns:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''column'' at line 1  

I am thinking "column" is reserved word and shouldn't be used. But how do I get around to deleting it ?

Deny access to one database, allow all the rest

Posted: 26 Sep 2013 07:49 PM PDT

How can I deny access to specific databases for a login? I know that I can walk through and make the login a db_owner on all the databases I want it to have access to, but how do I do the opposite? I want the login to have access to everything except the databases I specify.
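
A hedged sketch of one approach, assuming SQL Server (the db_owner mention suggests it): create a user for the login in each database to be blocked, then deny it CONNECT. Names are placeholders.

USE RestrictedDb;
CREATE USER SomeLogin FOR LOGIN SomeLogin;
DENY CONNECT TO SomeLogin;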

MySQL - Unable to change 'old_passwords' variable

Posted: 26 Sep 2013 05:53 PM PDT

I have full access to a MySQL 5.0.27-community-nt-log database. One of our new updates for a core app requires PHP 5.4; after installing and configuring it, I'm stuck at the error regarding old vs. new passwords.

I've followed every thread I can find and I'm pretty sure I understand the process:

  1. Disable 'old_passwords'
  2. Reset the password to the new 41-byte hash format

But I cannot seem to get 'old_passwords' set to 'OFF'. Logging in as root, I've tried the following:

  • I've removed it from the my.cnf file (I've also tried setting it to '0', 'False', etc.)
  • Manually set it in the query browser, both globally and per-session
  • Turned the option on/off in MySQL Administrator

and restarted the service, but SHOW VARIABLES LIKE 'old_passwords' always reports 'ON'.

Changing the password still produces a 16-character hash.

I'm sure once I'm able to turn this variable off I'll be fine. Any advice?
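
In case it helps frame the test, a minimal sequence on MySQL 5.0 (assumes the SUPER privilege; the account name is a placeholder):

SET GLOBAL old_passwords = 0;   -- affects new connections
SET SESSION old_passwords = 0;  -- so PASSWORD() below produces the 41-byte hash
SET PASSWORD FOR 'someuser'@'localhost' = PASSWORD('newpass');
SHOW VARIABLES LIKE 'old_passwords';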

How to interpret and modify a dBase (.dbf) file?

Posted: 26 Sep 2013 03:48 PM PDT

I'm working with dBASE (.dbf) files for a project using GIS software, because it seems they are more efficient. I've converted several .xml files to .dbf with OpenOffice 3.4 and loaded them into the GIS. However, in some columns, fields that appeared with different numbers in OpenOffice turn into a series of ones (1) in the GIS attribute table. This is an example of the first row of one element:

Na,N,6,2

I think I can affect this by changing the first row of my .dbf file.

First question:

I would like to know what each character means. I know that the first ("Na") is the name of the field; "N" is probably the type, Number? I've no idea about the third and the fourth (the red and blue arrows in the screenshot).

Second question:

Why, if I write "Na", does it become "NA" (capitalized) when I load the file? [screenshot: the .dbf opened in OpenOffice]

Thanks a lot!

Can I speed up this slow query which should be limited by MIN/MAX values?

Posted: 26 Sep 2013 07:21 PM PDT

The intention here is to look at the user retention over a period of time.

My current solution employs a temporary table, since I do this over several ranges.

CREATE TEMPORARY TABLE user_retention_temp (first DATE, last DATE, amount INT);

INSERT INTO user_retention_temp
(SELECT MIN(ymd) AS first, MAX(ymd) AS last, player_id
 FROM day_item
 GROUP BY player_id);

Here I would have liked to limit this query to only match player_ids whose first/last fall in a specific period of time.

Instead, I'm forced to do that in the subsequent query:

SELECT LEAST(last - first, 7) AS diff, COUNT(*) AS amount
FROM user_retention_temp
WHERE first >= ? AND last < ?
GROUP BY diff

This is pretty awful since I'm filling up the temporary table with lots of data I don't need.

Is there a faster way? Assume at least 2-3 million entries in day_item and that everything we need is indexed.
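
A hedged thought on the filtering step: since first and last are aggregates, a HAVING clause can restrict the groups at insert time, so the temp table only ever holds the relevant players:

INSERT INTO user_retention_temp
(SELECT MIN(ymd) AS first, MAX(ymd) AS last, player_id
 FROM day_item
 GROUP BY player_id
 HAVING MIN(ymd) >= ? AND MAX(ymd) < ?);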

XML stored as varbinary

Posted: 26 Sep 2013 09:22 PM PDT

I'm wondering: if I have to store XML inside a column in SQL Server (just storing, not querying), which datatype is better to use, XML or varbinary? I made an example:

DECLARE @ClientList XML
SET @ClientList =
'<?xml version="1.0" encoding="UTF-8"?>
<!-- A list of current clients -->
<People>
<Person id="1234">
<FirstName>Juan</FirstName>
<LastName>Perez</LastName>
</Person>
<Person id="98765">
<FirstName>Alfredo</FirstName>
<LastName>Domiguez</LastName>
</Person>
</People>'

DECLARE @ClientListBinary varbinary(max)
SET @ClientListBinary = CONVERT(varbinary(max), @ClientList, 1)
PRINT DATALENGTH(@ClientList)
PRINT DATALENGTH(@ClientListBinary)

and the result is that varbinary uses more space... Any thoughts on that?

How do I find out how far along my PostgreSQL query is?

Posted: 26 Sep 2013 02:45 PM PDT

I have a pretty decent idea of how many rows my SELECT...INTO query will actually process (e.g. I know how many will materialize).

I understand Postgres won't tell me percentage completeness, but is there a way (buried deep in logs, system tables, or otherwise) to find out how many rows have been pumped into the destination table, or read by the SELECT query?
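
Not the row count itself, but a sketch of the usual first stop for long-running statements (column names assume PostgreSQL 9.2 or later):

SELECT pid,
       now() - query_start AS runtime,  -- how long it has been running
       state,
       query
FROM pg_stat_activity
WHERE state = 'active';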

Why would one use the InnoDB plugin instead of the built-in InnoDB in MySQL 5.1? (GPL)

Posted: 26 Sep 2013 04:44 PM PDT

InnoDB works in MySQL 5.1 whether or not the InnoDB plugin (ha_innodb_plugin.so) is loaded at startup; configuration dependent, of course.

What are the advantages and disadvantages of using the built-in InnoDB versus the plugin?

Thanks very much for any explanation :)

MongoDB cloneCollection between CentOS servers times out at 2 hours

Posted: 26 Sep 2013 12:46 PM PDT

I'm trying to clone a MongoDB collection from CentOS server A to CentOS server B. Both are running 2.4.6. The copy works great until exactly 2 hours in, at which point I get a 9001 socket exception: "errno:10054 An existing connection was forcibly closed by the remote host."

I have tried setting tcp_keepalive_time to 3600, down from 7200 (2 hours), on both servers, in the hope that my keepalive probes will tell routers to keep my connection alive past the 1-hour mark.

No dice. Even with the shorter keepalive time (and a 70-second interval for keepalive probes), my connection dies at 2 hours.

Does anyone know how to set up my servers and/or my mongodb configuration to allow my tcp connection to be active past 2 hours?

Thanks!

Convert SQL Server query to MS Access syntax

Posted: 26 Sep 2013 12:40 PM PDT

I have this query in SQL Server:

SELECT Dated, AccountID, AccountName, Rate,
    CASE WHEN (Credit - Debit) > 0 THEN Credit - Debit END AS Credit,
    CASE WHEN (Credit - Debit) < 0 THEN Credit - Debit END AS Debit,
    Balance
FROM TempTrallBalance

How can I convert it to Access syntax?
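
A hedged translation attempt: Access SQL has no CASE, but IIf covers this. Note that Access tends to reject an output alias that also appears as a column in the same expression, so the aliases here are renamed:

SELECT Dated, AccountID, AccountName, Rate,
       IIf(Credit - Debit > 0, Credit - Debit, Null) AS CreditAmt,
       IIf(Credit - Debit < 0, Credit - Debit, Null) AS DebitAmt,
       Balance
FROM TempTrallBalance;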

Oracle query is slower the second time

Posted: 26 Sep 2013 01:20 PM PDT

I have a general question about something I'm seeing in performance tuning in Oracle. I have a query that I'm testing. It's too big and complex for me to provide a simple example, sorry! (I tried paring it down, but lost the symptom.)

The first time I run the query, it takes 2 seconds. Subsequent runs take 3, 4, or even 5 seconds. This happens consistently: if I change the spacing, I get 2 seconds again, then it goes back up.

This is the opposite of what I would expect: after the first time, the database doesn't need to parse the query, and blocks should have been read into the buffer cache. The only thing I can think of is that it is writing temp data somewhere, in such a way that it is faster to write and read it than to just read it. But that of course makes no sense.

Does anyone know why this is happening? Or any red flags in the execution plan that would cause something like this?

FYI, this is a SELECT, with multiple joins, aggregation, and views. No internal PL/SQL.

Thank you

Should I be concerned about excessive Com_change_db commands?

Posted: 26 Sep 2013 04:38 PM PDT

I installed MONyog recently to check for problems on a MySQL server that I admin. Most of the information was useful and helpful, but there's one warning I am not sure is worth fixing.

It reports that Com_change_db commands are excessive (currently 9+ million or 58 per second), or unusually high. The connections to the database are strictly Windows .NET applications and they use Connector/Net as an API.

I typically connect such that I specify which database to use when creating a MySqlConnection object. I discovered in a test that doing this causes the Com_change_db to increment by one when the connection is established.

I can "fix" it by not specifying a database, and instead qualifying the database within the query every time. (e.g. db.table.col instead of table.col)

Is it worth going through the entire code base of the application to make this change? Is MONyog concerned with something that doesn't matter?
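
For reference, the increment described above can be verified by sampling the counter before and after opening a connection:

SHOW GLOBAL STATUS LIKE 'Com_change_db';
-- open a .NET connection that names a database,
-- then run it again and compare the two values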

db2ckbkp - How to determine what tablespaces are temp-tablespace

Posted: 26 Sep 2013 11:55 AM PDT

I am writing a script to restore databases from backup. The script should need as little input as possible to fulfill this task (in an efficient way). The restore command takes the parallelism parameter. The idea is to set this parameter to the number of tablespaces that are not temporary tablespaces.

I found the db2ckbkp command, which not only verifies the backup file but also outputs lots of useful information.

I was planning to run it with -t to get the tablespace information, but I'm having trouble interpreting what it prints. Following is the output for one of the tablespaces.

    USERSPACE1
              tbspInImage: T
                       ID: 2
                    flags: 3122
                   flags2: 400
              extent_size: 32
            prefetch_size: 32
                  version: 9
                  flavour: 6
                    state: 0
      statechangeobjectid: 0
        statechangepoolid: 0
                  LifeLSN: 00000000000005C2
           LoadPendingLSN: 0000000000000000
          LoadRecoveryLSN: 0000000000000000
                 BeginLSN: 0000000000000000
                   EndLSN: 0000000000000000
               StordefLSN: 0000000000000000
          Full Backup LSN: 0000000000000000
          Last Backup LSN: 0000000000000000
         Full Backup Time: 00000000 = "19691231180000"
         Last Backup Time: 00000000 = "19691231180000"
               TotalPages: 8192
             UseablePages: 8160
              reorgPoolID: 0
               reorgObjID: 0
           poolReorgCount: 0
          # of containers: 1
            current_group: 0
                cont_csum: 2004299914
      current_map_entries: 1
                page_size: 4096
                 map_csum: 4294967294
     tsp rfwd encountered: 16

      Container CB
                     Type: 6
               TotalPages: 8192
              UsablePages: 8160
       # of OS rsvd bytes: 512
            Page 0 offset: 131072
               Tag offset: 512
            Extent offset: 0
                     Name: /data/devinsth/NODE0000/.../T0000002/C0000000.LRG

To determine whether it is a temp space or not, the flavour is not suitable, since it is 6 for all of the tablespaces. However, the container type seems to differ for temp spaces: Type is 0 for temporary tablespaces and 6 for all other tablespaces.

When I use the -a parameter, I get another piece of information.

    00003A91: SYSCATSPACE
              ID: 0  Type: 2  Datatype:  0  Size: 29932  NumContainers: 1  tbspInImage: 1
    00003AC5: TEMPSPACE1
              ID: 1  Type: 1  Datatype: 16  Size:     1  NumContainers: 1  tbspInImage: 1
    00003AF9: USERSPACE1
              ID: 2  Type: 2  Datatype: 32  Size:    96  NumContainers: 1  tbspInImage: 1
    00003B2D: TS_DAT_SPF_4K
              ID: 3  Type: 2  Datatype: 32  Size:  7776  NumContainers: 1  tbspInImage: 1
    00003B61: TS_IND_SPF_4K
              ID: 4  Type: 2  Datatype: 32  Size:    96  NumContainers: 1  tbspInImage: 1
    00003B95: TS_LOB_SPF_4K
              ID: 5  Type: 2  Datatype: 32  Size:    96  NumContainers: 1  tbspInImage: 1
    00003BC9: TS_TMP_4K
              ID: 6  Type: 1  Datatype: 16  Size:     1  NumContainers: 1  tbspInImage: 1
    00003BFD: TS_USR_TMP_BAT_4K
              ID: 7  Type: 1  Datatype: 64  Size:     1  NumContainers: 1  tbspInImage: 1

Can I use the type from this table? It seems to be 1 for temporary tablespaces and 2 for all other tablespaces. I tried to find documentation on the internet that explained the output of db2ckbkp but wasn't successful. Any help in this matter is appreciated.

Find rows where any column matches a search pattern

Posted: 26 Sep 2013 02:49 PM PDT

I have a table "Test" with 15 different columns of different data types.

I'm looking for rows containing a string like "ABC", but "ABC" can be present in any column, and I want the whole records that contain it.

I cannot restrict the search with a simple WHERE clause, because I really have no idea which column the "ABC" resides in. Is there a way to do this?
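
A hedged sketch of the brute-force version (col1 through col15 are hypothetical names; non-character columns need a cast before LIKE):

SELECT *
FROM Test
WHERE CAST(col1 AS CHAR(100)) LIKE '%ABC%'
   OR CAST(col2 AS CHAR(100)) LIKE '%ABC%'
   -- ... repeat for the remaining columns ...
   OR CAST(col15 AS CHAR(100)) LIKE '%ABC%';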

SQL Server 2005 get execution plan of an overnight stored procedure

Posted: 26 Sep 2013 03:49 PM PDT

There is an overnight job that runs a long time. I used Profiler to get a trace and identified a statement (in a stored procedure) that takes most of the execution time. Now I need to see the execution plan of that statement so I can see what I can do. But I can't rerun the query, since it uses a temp table created in the proc, and the logic of the proc is complex and very hard to reproduce (it involves getting data from the file system and exchanging data with an Oracle database via a linked server). I'm not saying reproducing it is impossible, but it would be rather time consuming. So is there a way I can see the execution plan of the proc, or of that statement in particular?
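
A sketch of one route that does work on SQL Server 2005: pull the cached plan from the plan-cache DMVs, filtering on a distinctive fragment of the statement's text:

SELECT st.text, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%some distinctive fragment of the statement%';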

Attribute and Value tables on SQL

Posted: 26 Sep 2013 07:21 PM PDT

I'm not a SQL expert and I'd like to ask for the best practices on the following schema.

On my current project, we're using Attribute and AttributeValue tables quite often. I'm thinking this might not be a good practice since it involves joins when we need that data.

For example: alongside the Person table, there are two more tables that basically contain "extra" data about a person: PersonAttribute and PersonAttributeValue. I don't have a strong argument to discourage this practice on my team, but I'd like to hear what SQL experts have to say.

Thanks
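
For context, a minimal sketch of the pattern being discussed (column names are assumptions):

CREATE TABLE PersonAttribute (
    Id   int PRIMARY KEY,
    Name varchar(100) NOT NULL          -- e.g. 'EyeColor'
);

CREATE TABLE PersonAttributeValue (
    PersonId    int NOT NULL REFERENCES Person (Id),
    AttributeId int NOT NULL REFERENCES PersonAttribute (Id),
    Value       varchar(255),           -- everything stored as text
    PRIMARY KEY (PersonId, AttributeId)
);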

How to speed up query on table with millions of rows

Posted: 26 Sep 2013 12:02 PM PDT

The Issue:

I'm working on a big table that consists of about 37 million rows. The data includes measurements from many devices, each made at a certain time, e.g. '2013-09-24 10:45:50'. Every day, all of those devices send many measurements at different intervals and times. I want to make a query that selects the most recent measurement of each day ('most recent' meaning the latest of all measurements made that day) for 2 months, e.g. from 2013-01-01 to 2013-02-01.

The problem is that this query takes a long time to run, despite all of the indexes I've made on different columns. I also created an auxiliary table that contains MAX(MeterDate) and the Id of the measurement row it came from. I noticed that an index on MeterDate alone isn't useful, because the column contains both date and time, so I converted it with CONVERT(date, MeterDate). I thought that after joining the auxiliary table with [dbo].[Measurements] the query would be faster, but it still takes more than 12 s, which is too long for me.

The structure of table:

CREATE TABLE [dbo].[Measurements] (
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [ReadType_Id] [int] NOT NULL,
    [Device_Id] [int] NULL,
    [DeviceInterface] [tinyint] NULL,
    [MeterDate] [datetime] NULL,
    [MeasureValue] [decimal](18, 3) NULL
)

Every row of the Measurements table includes a measurement value at an exact MeterDate, e.g. "2008-04-04 13:28:44.473".

Direct select structure:

DECLARE @startdate datetime = '2013-07-01';
DECLARE @enddate datetime = '2013-08-01';

SELECT *
FROM [dbo].[Measurements]
WHERE [MeterDate] BETWEEN @startdate AND @enddate

Does anyone know how to rebuild the table, or which columns to index, to speed the query up a bit? Thanks in advance for any info.

Edit:

The table that I used was created by this query:

WITH t1 AS
(
    SELECT [Device_Id], [DeviceInterface],
           CONVERT(date, MeterDate) AS OnlyDate,
           MAX(MeterDate) AS MaxMeterDate
    FROM [dbo].[Measurements]
    GROUP BY [Device_Id], [DeviceInterface], CONVERT(date, MeterDate)
)
SELECT t1.[Device_Id], t1.[DeviceInterface], t1.[OnlyDate], r.Id
INTO [dbo].[MaxDatesMeasurements]
FROM t1
JOIN [dbo].[Measurements] AS r
  ON r.Device_Id = t1.Device_Id
 AND r.DeviceInterface = t1.DeviceInterface
 AND r.MeterDate = t1.MaxMeterDate

Then I wanted to join the newly created table [dbo].[MaxDatesMeasurements] with the old [dbo].[Measurements] and select the matching rows:

DECLARE @startdate datetime = '2013-07-01';
DECLARE @enddate datetime = '2013-08-01';

SELECT *
FROM [dbo].[MaxDatesMeasurements] AS t1
JOIN [dbo].[Measurements] AS t2 ON t1.[Id] = t2.[Id]
WHERE t1.[OnlyDate] BETWEEN @startdate AND @enddate
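
One hedged suggestion to experiment with: a composite index that matches the grouping in the CTE above, with the measured value included, so the GROUP BY can be satisfied from the index alone:

CREATE NONCLUSTERED INDEX IX_Measurements_Device_Date
ON [dbo].[Measurements] ([Device_Id], [DeviceInterface], [MeterDate])
INCLUDE ([MeasureValue]);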

How can I integrate a new system to a design scheme I previously created?

Posted: 26 Sep 2013 08:18 PM PDT

This is a follow-up to a question I wrote a while ago; for reference, you can look it up here.

I've developed a curriculum-based database schema that looks like this (thanks Joel Brown): [schema diagram]

Now that a new educational system has been introduced (while the old one is still not "deprecated"), I have been looking for a way to integrate both curriculum systems without creating a whole new system; as the two systems are fundamentally different, this has been quite a challenge for me.

Here's a picture describing the new system: [diagram of the new system]

(English level 1, 2, and 3 are classified as courses)

See, the new system removes the whole grades system. A jr. high school student can enroll in the exact same course (English level 3, for example) as a sr. high school student.

Is there a way to make those two systems work with my current design scheme? Or should I create a whole new scheme for the new system?

What's a good way to model user authorization to hierarchical data?

Posted: 26 Sep 2013 08:23 PM PDT

I'm wanting to implement something that's basically an ACL in SQL Server (2008, we'll be moving to 2012). We're working with farm production information. We will have users who need to be authorized to view production records, but only at certain vertical levels in a hierarchy.

Our data hierarchy looks like this:

System
  - Farm
    - Group
      - Animal

The idea is that some users will have access at the System level, and can see records for all Farms, Groups, and Animals within that System. Likewise, some users will have permission starting at the Farm level, and need access only linked to that Farm and all Groups (and animals) within it.

Each table contains a primary key column, and a foreign key column linking it to the parent record (along with whatever other attributes each entity requires).

What I've implemented in the past is a two-table system for linking users to the appropriate items they're allowed to see. Implemented here, it would look like this:

Table:   Authorizations          Table:   FullAuthorizations
Columns: Id (PK)                 Columns: Id (PK)
         UserId                           UserId
         ObjectId                         SystemId
         ObjectType                       FarmId
                                          GroupId
                                          AnimalId

The application inserts a record into Authorizations, with the user to authorize, the record id (System id, Farm id, etc), and the type of record (System, Farm, etc). The FullAuthorizations table is used to denormalize the farm hierarchy for easier/faster filtering of data. A trigger is used on the Authorizations table (and each of the farm, etc, tables) to update FullAuthorizations. I considered using a View here, in a previous project with additional levels of entities, and the performance was quite poor once we began getting several hundred thousand records.

The queries would look something like:

SELECT *
FROM dbo.Animals a
WHERE EXISTS (
    SELECT 1
    FROM dbo.FullAuthorizations fa
    WHERE fa.UserId = 1 AND fa.AnimalId = a.Id
)

In the other project where we're doing this, the solution is performant, but feels like a bit of a hack, and I especially don't like that we can't maintain referential integrity on Authorizations with the associated objects. I'd appreciate feedback on some other possible solutions. I've been looking at things like Nested Sets, but not sure something like that fits this particular problem.
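
One hedged alternative for the referential-integrity complaint: replace the polymorphic ObjectId/ObjectType pair with one nullable foreign key per level, plus a CHECK that exactly one is set (the referenced table names are assumptions):

CREATE TABLE dbo.Authorizations (
    Id       int IDENTITY(1,1) PRIMARY KEY,
    UserId   int NOT NULL REFERENCES dbo.Users (Id),
    SystemId int NULL REFERENCES dbo.Systems (Id),
    FarmId   int NULL REFERENCES dbo.Farms (Id),
    GroupId  int NULL REFERENCES dbo.Groups (Id),
    AnimalId int NULL REFERENCES dbo.Animals (Id),
    CHECK (  CASE WHEN SystemId IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN FarmId   IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN GroupId  IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN AnimalId IS NOT NULL THEN 1 ELSE 0 END = 1)
);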

Oracle shared memory exception ORA-04031

Posted: 26 Sep 2013 02:23 PM PDT

I'm trying to establish an Oracle database connection. It throws the following exception:

ORA-04031: unable to allocate 4048 bytes of shared memory     ("shared pool","unknown object","sga heap(1,0)","kglsim heap")  

I have tried connecting to the database through JDBC as well as SQL Developer; it throws the exception in both cases.

How can I fix this?
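
A hedged note: ORA-04031 usually means the shared pool is exhausted. If the instance can be reached as SYSDBA, a common first step is to inspect and grow the pool; the 512M figure below is purely illustrative:

-- in SQL*Plus:
SHOW PARAMETER sga_target
SHOW PARAMETER shared_pool_size

-- example only; pick a size appropriate to the server
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = BOTH;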

Memcached plugin on MariaDB?

Posted: 26 Sep 2013 04:23 PM PDT

I'd like to try the new NoSQL (memcached) feature in MySQL 5.6, but I am using MariaDB 10.0.2 on Debian 6 and don't fancy going back to MySQL.

I'm wondering whether the memcached plugin has been added to MariaDB, and if not, whether one can still use it as an add-on.

And if not, can I use the existing Cassandra plugin of MariaDB to the same effect?

Failed copy job deletes all users

Posted: 26 Sep 2013 09:23 PM PDT

Since the progression of this was apparently somewhat difficult to follow the first time around:

I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the copy database wizard with the detach/reattach method.

The copy failed. The log indicates that it was unable to execute a CREATE VIEW action for a particular view, because the datasource for the view did not exist. This is interesting in its own right, as the source most certainly exists, and the view(s) in question are fully functional in the source database. I'm not really clear, just yet, on how significant this is, as I've yet to figure out precisely why this generated an error.

This resulted in the deletion of all non-system user associations from the source database, leaving me with users dbo, information_schema, sys, and guest. Non-system roles were also deleted. Schemas were unaffected.

I have since restored the damaged database from backup. Academically, however, I would like to know the following:

  1. Why would a failed copy operation strip the user associations from the source database?
  2. Is there any sort of maintenance/rebuild/repair operation that could be performed on the source database to repair it?
  3. The loss of the users was immediately obvious, but given the rather mysterious nature of a failed copy job apparently damaging the source database, how concerned should I be about less obvious effects? In theory, I would expect restoring from backup would alleviate this concern, but do I have any cause to worry about, e.g., the master database?

This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case, the failed copy job obliterates the users and roles from the source database.

Removing the views that generated errors allows the copy to complete, and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered.

If it's important, I've tried rebuilding the indexes of the system databases, as well as the damaged database, to no appreciable effect.

The error generated:

1:00:25 PM,5/28/2013 1:00:25 PM,0,0x,ERROR : errorCode=-1073548784 description=Executing the query "CREATE VIEW [Sourcing].[PermittedArrProducts]
AS
SELECT dbo.tblArrProducts.ArrProductID, dbo.tblArrProducts.ArrangementID, dbo.tblArrProducts.ContainerTypeID, dbo.tblArrProducts.Quantity
FROM Sourcing.PermittedArrangements INNER JOIN
     dbo.tblArrProducts ON Sourcing.PermittedArrangements.ArrangementID = dbo.tblArrProducts.ArrangementID
" failed with the following error: "Invalid object name 'Sourcing.PermittedArrangements'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

MySQL Tables Require Daily Repairs - Server, Table or HD?

Posted: 26 Sep 2013 05:23 PM PDT

I've been experiencing a weird issue with one of my MySQL DBs. Every day, sometimes 2-3 times per day, I need to repair the tables. The MySQL DB has 25 tables with 5.6m rows in total.

The bigger ones are:

Table A - 599k rows / 867MB
Table B - 2.1m rows / 146MB
Table C - 2.2m rows / 520MB

It seems table C needs to be repaired pretty frequently; tables A and B not as much.

When the table needs to be repaired, I'm not seeing it being marked as crashed or in use. But through other tools, I can see the data is not what it should be.

When I do repair the table, I'll see a message similar to:

[table c] repair info Wrong bytesec:  54-55-102 at 368251940; Skipped
[table c] repair warning Number of rows changed from 2127934 to 2127931

or

[table c] repair info Wrong bytesec:  171-30-101 at 341237312; Skipped
[table c] repair warning Number of rows changed from 1984585 to 1984582

I've tried making adjustments in my.cnf, but it made no difference.

The server is a cloud server running both MySQL and Apache. Plenty of space available on all HDs:

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2             99G   14G   80G  15% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/xvda1             97M   49M   44M  53% /boot
/dev/xvdc1            296G   25G  257G   9% /data

I'm not sure if this is a problem with the cloud HD, the server or the tables themselves. The problem didn't start happening until about 2 months ago and the size of the DB has only changed by 300-400MB until now.

Any idea what I should be looking at to verify where the problem might be?

Using MySQL v5.1.66 and MyISAM

Thanks in advance.

Best, Cent

Azure SQL Administrator Can't Access Master Database

Posted: 26 Sep 2013 12:23 PM PDT

I created a SQL server and database. I can log in to the database management page with my administrator account and manage the database I created, but when I try to look at the master database, it says:

Failed to create context. Details: The current user does not have VIEW DATABASE STATE permissions on this database.

What am I doing wrong? I couldn't find anything else like this on the internet.

I log in with the account that the SQL server page on the Azure management portal says is the "Administrator Login".

Constructing an SQL query with time intervals

Posted: 26 Sep 2013 03:23 PM PDT

I have a simple but large table in an SQL database with four fields: id (int, PK), time (Unix timestamp, not null), value (double, not null), and ID_fk (integer foreign key).

Data is recorded every minute, but at a seemingly random second, for subsets of sensors; each sensor has its own foreign key value, though.

My question is this: I need to write a query that will tell me when, over all two-minute intervals in the dataset, a series of sensors has a value greater than 0.9.

I was thinking of creating a temporary table and doing it that way, simplifying the time by storing it only at minute resolution.

Any advice would be gratefully received. Thanks, James
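
A hedged sketch of the bucketing idea, without a temporary table: integer-divide the Unix timestamp into 120-second buckets and aggregate per sensor (the table name readings is made up):

SELECT ID_fk,
       FLOOR(time / 120) * 120 AS bucket_start,  -- start of the 2-minute bucket
       MIN(value) AS min_value
FROM readings
GROUP BY ID_fk, FLOOR(time / 120)
HAVING MIN(value) > 0.9;  -- every reading in the bucket exceeded 0.9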

Mysqldump tables excluding some fields

Posted: 26 Sep 2013 07:23 PM PDT

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO, and TATA. I just want to mysqldump some fields of table USER, excluding fields like mail, ip_login, etc.

How can I do this?
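
For what it's worth, mysqldump has no per-column filter; a common workaround is to stage the wanted columns into a helper table and dump that (the column list below is hypothetical):

CREATE TABLE USER_export AS
    SELECT id, login, created_at   -- only the fields you want dumped
    FROM USER;
-- then on the command line:
--   mysqldump tests USER_export > user_export.sql
DROP TABLE USER_export;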

Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error

Posted: 26 Sep 2013 01:23 PM PDT

We are using a Sybase SQL Anywhere 12 db.

In the db there are indexes which are unique, but shouldn't be unique.

Therefore I'm looking for a quick way to list all tables with unique primary keys.

I tried

SELECT z.name
FROM sysobjects z
JOIN sysindexes i ON (z.id = i.id)
WHERE type = 'U'

The result was an error message: Anweisung konnte nicht ausgeführt werden. Tabellenname 'sysindexes' ist mehrdeutig SQLCODE=-852, ODBC 3-Status="42S13" Zeile 1, Spalte 1

Roughly translated: the statement could not be executed; the table name 'sysindexes' is ambiguous.

I found this query on the internet:

select 'Table name' = object_name(id),
       'column_name' = index_col(object_name(id), indid, 1),
       'index_description' = convert(varchar(210),
             case when (status & 16) <> 0 then 'clustered' else 'nonclustered' end
           + case when (status & 1) <> 0 then ', ' + 'ignore duplicate keys' else '' end
           + case when (status & 2) <> 0 then ', ' + 'unique' else '' end
           + case when (status & 4) <> 0 then ', ' + 'ignore duplicate rows' else '' end
           + case when (status & 64) <> 0 then ', ' + 'statistics'
                  else case when (status & 32) <> 0 then ', ' + 'hypothetical' else '' end end
           + case when (status & 2048) <> 0 then ', ' + 'primary key' else '' end
           + case when (status & 4096) <> 0 then ', ' + 'unique key' else '' end
           + case when (status & 8388608) <> 0 then ', ' + 'auto create' else '' end
           + case when (status & 16777216) <> 0 then ', ' + 'stats no recompute' else '' end),
       'index_name' = name
from sysindexes
where (status & 64) = 0
order by id

This looked like what I wanted, but it still produced the same ambiguous-sysindexes error.

What does "ambiguous sysindexes" mean in this context? Will/can this cause any errors in the future?

As a workaround I used Sybase Central (which, by the way, always opens on the first monitor, not on the one where it was closed; ugly behaviour) and found that an "Indexes" item showed what I was looking for.

But I still want to know what a programmatic solution looks like.
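
A hedged guess at the programmatic route: the ambiguity likely arises because SQL Anywhere exposes both its own SYS.SYSINDEXES view and an ASE-compatibility dbo.sysindexes, so owner-qualifying the name should resolve it. Something along these lines; the column names are from memory, so verify them against the catalog:

SELECT i.tname, i.iname, i.indextype
FROM SYS.SYSINDEXES i      -- owner-qualified, so no ambiguity
WHERE i.indextype = 'Unique';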

Mongo connection failing with 'Transport endpoint is not connected'

Posted: 26 Sep 2013 06:23 PM PDT

I have a 2 server installation.

A web server with Apache and a DB server with MongoDB.

I am load testing it, and at ~300 RPS I am getting this error:

PHP Fatal error: Uncaught exception 'MongoConnectionException' with message 'Transport endpoint is not connected'.

The only thing I'm noticing is that right before the failure, I see a lot of connections on Mongo:

insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn       time
     0    659      0      0       0       1       0   208m  1.28g    40m      0        0          0       0|0     0|0    62k   217k   486   03:57:20

Almost 500 connections here... but never more than that!

Mongo is 2.0.3. PHP is 5.3.x (latest of Debian install...)

Help!
