Monday, September 30, 2013

Monitor Usage on a SSAS 2012 Tabular Model

Posted: 30 Sep 2013 07:15 PM PDT

I have an SSAS 2012 tabular model and I am trying to figure out the most efficient way to track which users are accessing it. The model is on a server shared with other databases, so server-wide tracing and logging are options I'd rather not implement, as I don't want to affect performance for those other databases. I haven't been able to find much while searching online, and I am still pretty new to tabular models.

Has anyone implemented something similar to what I am looking for, or does anyone have ideas?

Unattended SQL install on a remote machine

Posted: 30 Sep 2013 06:54 PM PDT

My product installation does an unattended install of SQL (SQLExpress by default). I use command line arguments rather than a configuration file, so I'm quite familiar with this process.

This unattended install currently occurs on the same machine my install media is running on. However, I need to look at installing the SQL instance on a machine separate from the one the install media runs on. The target machine will be on the same network.

Is it possible to do this unattended? If so, is it simply extra command line arguments or is it a different process (i.e. a PowerShell script)?

(Note: can someone with sufficient rep please add the tag for me, as I cannot yet?)

SSDT/SSIS 2012 - How to automatically load some projects?

Posted: 30 Sep 2013 03:37 PM PDT

I did not use the default projects folder in SSDT, i.e. SSIS 2012; I used my own instead. Every time I open SSIS, I have to load the projects manually. Is there any way I can load them automatically?

Audit table insert restriction

Posted: 30 Sep 2013 02:36 PM PDT

So I'm trying to create my own audit table for some user management analysis and stumbled across a little problem. I use this trigger

CREATE OR REPLACE TRIGGER usr_t_audit_user_alnd
AFTER LOGON ON DATABASE
BEGIN
    INSERT INTO usr_t_audit_user (username, session_id, logon_day, logon_time)
    VALUES (UPPER(USER), SYS_CONTEXT('USERENV','SID'), SYSDATE, TO_CHAR(SYSDATE, 'HH24:MI:SS'));
END;

to insert some standard information into my table. It works fine, except that the logoff time in my other trigger isn't always correct. But my actual problem is that I don't want to record all user logons; for example, I only want to audit some chosen users. How can I achieve such a restriction? I can't put a WHERE clause in my trigger. Does anybody have ideas?
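A database-level logon trigger can't take a WHERE clause, but an IF around the INSERT achieves the same filtering. A minimal sketch (the user names here are hypothetical placeholders; the list could equally come from a lookup table):

```sql
CREATE OR REPLACE TRIGGER usr_t_audit_user_alnd
AFTER LOGON ON DATABASE
BEGIN
    -- Only audit the chosen users; this IF plays the role of a WHERE clause.
    IF SYS_CONTEXT('USERENV', 'SESSION_USER') IN ('APP_USER1', 'APP_USER2') THEN
        INSERT INTO usr_t_audit_user (username, session_id, logon_day, logon_time)
        VALUES (UPPER(USER), SYS_CONTEXT('USERENV', 'SID'), SYSDATE,
                TO_CHAR(SYSDATE, 'HH24:MI:SS'));
    END IF;
END;
/
```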

partitioning by datetime vs date vs int performance

Posted: 30 Sep 2013 04:06 PM PDT

I am trying to determine which partitioning strategy will be most efficient for my queries on SQL Server.

We know we want daily partitions. The data has a natural DATETIME field in it, so my gut instinct is to partition on that value. However, we have discussed the possibility of adding a DATE column (with just the time information stripped out), or possibly coding the date as an integer (YYYYMMDD, e.g., 20130930) and using that as the partitioning column.

All queries on this data will be on a specific date (WHERE ItemTime = '2013-09-30') or range of dates (WHERE ItemTime BETWEEN '2013-09-15' AND '2013-09-30'). For now, we always query on a date, but future requirements may include time details (WHERE ItemTime BETWEEN '2013-09-29 20:30:00' AND '2013-09-30 10:15:45').

I have tested performance of each strategy on a few hundred thousand rows of data, and seen no real difference. The production deployment, however, will be in the hundreds of millions up to maybe a couple billion rows.

Is one of these strategies going to lead to more efficient queries than the others? Why or why not?

Thanks for your help.

[EDIT] The queries will be formatted by application code, so I'm not concerned about how to translate between DATETIME, DATE, and INT. I can assume the query will be properly formatted based on whichever partitioning scheme is chosen. I'm just trying to find out if partition elimination will be faster using one of these data types.
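For reference, the daily scheme on the DATE flavor might be sketched like this (all names and boundary values are hypothetical; RANGE RIGHT with one boundary per day is the common daily pattern in SQL Server):

```sql
-- Hypothetical daily partitioning on a DATE column.
CREATE PARTITION FUNCTION pf_daily (date)
AS RANGE RIGHT FOR VALUES ('2013-09-28', '2013-09-29', '2013-09-30');

CREATE PARTITION SCHEME ps_daily
AS PARTITION pf_daily ALL TO ([PRIMARY]);

-- The table would then be created ON ps_daily(ItemDate), so that an
-- equality or range predicate on ItemDate can eliminate partitions.
```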

How to concatenate multi row sql output

Posted: 30 Sep 2013 05:39 PM PDT

Version Information:

Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
HP Server Automation v9.14

I am creating a report of all my HP-SA managed servers. The report includes data from multiple tables. With the exception of Networking information everything is included on one row. I'm trying to come up with a method to concatenate the NIC NAME, MAC, IP, DNS so that I don't get multiple rows for each active NIC.

I ran into roadblocks with subselect queries, not knowing how to pass D.DVC_ID from the outer query into the subquery. I briefly tried WM_CONCAT and LISTAGG without success. Any help would be greatly appreciated.

Posting the full SQL in case any other HPSA users out there could benefit from knowing where in the database to look for some basic hardware information:

SELECT D.SYSTEM_NAME AS "HOSTNAME",
    D.DVC_DESC AS "OPSWARE NAME",
    I.HW_ADDR AS "MAC ADDRESS",
    I.SLOT AS "INTERFACE",
    I.IP_ADDRESS AS "IP ADDRESS",
    I.NETMASK AS "SUBNET",
    D.DEFAULT_GW AS "GATEWAY",
    D.DVC_MFG AS "SERVER VENDOR",
    D.DVC_MODEL AS "SERVER MODEL",
    CONCAT (
        ROUND(MEM.QUANTITY / 1024 / 1024, 1),
        ' GB'
        ) AS "RAM",
    CPU.CPU_MODEL AS "CPU TYPE",
    COUNT(CPU.CPU_SLOT) AS "CPU QUANTITY"
FROM TRUTH.DEVICES D
INNER JOIN TRUTH.MEMORY_COMPONENTS MEM ON (D.DVC_ID = MEM.DVC_ID)
INNER JOIN TRUTH.CPU_COMPONENTS CPU ON (D.DVC_ID = CPU.DVC_ID)
INNER JOIN TRUTH.DEVICE_ROLES DR ON (D.DVC_ID = DR.DVC_ID)
INNER JOIN TRUTH.INTERFACES I ON (D.DVC_ID = I.DVC_ID)
INNER JOIN TRUTH.CUSTOMER_CLOUDS CC ON (DR.CUST_CLD_ID = CC.CUST_CLD_ID)
INNER JOIN TRUTH.DATA_CENTERS DC ON (DC.DATA_CENTER_ID = CC.DATA_CENTER_ID)
WHERE MEM.MEMORY_TYPE != 'SWAP'
    AND I.ENABLED_FLG = 'Y'
GROUP BY D.SYSTEM_NAME,
    D.DVC_DESC,
    I.HW_ADDR,
    I.SLOT,
    I.IP_ADDRESS,
    D.DVC_MFG,
    D.DVC_MODEL,
    MEM.QUANTITY,
    CPU.CPU_MODEL,
    I.NETMASK,
    D.DEFAULT_GW
ORDER BY HOSTNAME ASC,
    INTERFACE ASC

With a sample result of:

HOSTNAME         OPSWARE NAME  MAC ADDRESS        INTERFACE                IP ADDRESS      SUBNET         GATEWAY       SERVER VENDOR  SERVER MODEL             RAM   CPU TYPE                                CPU QUANTITY
---------------  ------------  -----------------  -----------------------  --------------  -------------  ------------  -------------  -----------------------  ----  --------------------------------------  ------------
WIN-6V18DDNQRL4  Server1       00:50:56:BB:66:A2  Local Area Connection    172.31.0.149    255.255.255.0  172.31.0.10   VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2
WIN-6V18DDNQRL4  Server1       00:50:56:BB:03:D3  Local Area Connection 2  169.254.246.97  255.255.0.0    172.31.0.10   VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2

Desired Output would have everything to do with the server on one row similar to the following:

HOSTNAME         OPSWARE NAME  (MAC ADDRESS/INTERFACE/IP ADDRESS/SUBNET/GATEWAY)                                                                                                                 SERVER VENDOR  SERVER MODEL             RAM   CPU TYPE                                CPU QUANTITY
---------------  ------------  ----------------------------------------------------------------------------------------------------------------------------------------------------------------  -------------  -----------------------  ----  --------------------------------------  ------------
WIN-6V18DDNQRL4  Server1       (00:50:56:BB:66:A2/Local Area Connection/172.31.0.149/255.255.255.0/172.31.0.10,00:50:56:BB:03:D3/Local Area Connection 2/169.254.246.97/255.255.0.0/172.31.0.10)  VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2
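For reference, the NIC columns could be collapsed roughly like this with LISTAGG (a sketch only; note that LISTAGG arrived in Oracle 11.2, so on 11.1.0.7 the unsupported WM_CONCAT or an XMLAGG-based workaround would be needed instead):

```sql
-- Sketch: aggregate all enabled NICs per device into one column in a
-- derived table, then join that back to the device row. Requires 11.2+.
SELECT D.SYSTEM_NAME AS "HOSTNAME",
       N.NICS AS "(MAC/INTERFACE/IP/SUBNET)"
FROM TRUTH.DEVICES D
JOIN (SELECT I.DVC_ID,
             LISTAGG('(' || I.HW_ADDR || '/' || I.SLOT || '/' || I.IP_ADDRESS
                     || '/' || I.NETMASK || ')', ',')
               WITHIN GROUP (ORDER BY I.SLOT) AS NICS
      FROM TRUTH.INTERFACES I
      WHERE I.ENABLED_FLG = 'Y'
      GROUP BY I.DVC_ID) N ON (D.DVC_ID = N.DVC_ID);
```

This sidesteps the correlated-subquery problem: the aggregation happens once per DVC_ID in the derived table, and the join key replaces passing D.DVC_ID into a subquery.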

Thank you, Lars

Query to compare two subsets of data from the same table?

Posted: 30 Sep 2013 02:01 PM PDT

My problem:

I'm trying to compare two subsets of data within one table. I have two methods that partially work, and a certainty that there must be a more correct way to do it.

The idea here is a table which contains datasets about the same systems over time, and I would like to compare them and especially to see when there are introductions or absences. Allow me to demonstrate with a simple test table:

mysql> select * from gocore;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   1 | enterprise | old  | apache   | 2.2.25   |
|   2 | enterprise | new  | apache   | 2.4.6    |
|   3 | enterprise | new  | tomcat   | 7.0.42   |
|   4 | enterprise | old  | geronimo | 2.1.7    |
+-----+------------+------+----------+----------+

In this example there are two datasets - the tag "old" dataset, and the tag "new" dataset. Each reflects a data sample taken at some point in time. For server "enterprise", we have one software package that changed over time (apache), one software package that was introduced (tomcat), and one software package that became absent (geronimo).

My goal: A query that will allow me to summarize the state between "old" and "new":

+------------+----------+----------+----------+
| server     | software | revision | revision |
+------------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | 2.4.6    |
| enterprise | geronimo | 2.1.7    | NULL     |
| enterprise | tomcat   | NULL     | 7.0.42   |
+------------+----------+----------+----------+

It is important for my purposes to be able to see the 'NULL' cells above - I need to know when software has been added or removed from the system. TO BE CLEAR, the table above is not the result of a query - it's me using a text editor to fix up what I was getting to describe what I'm looking for. I need your help to figure out the query that would make that table :)

My Kludges:

If I perform a LEFT JOIN and use the WHERE clause to discriminate between "old" and "new" tags, I get results for only those entries that exist under both tags:

mysql> select old.server, old.software, old.revision, new.software, new.revision
    -> from gocore as old left join gocore as new on old.software = new.software
    -> where old.tag = 'old' and new.tag = 'new';
+------------+----------+----------+----------+----------+
| server     | software | revision | software | revision |
+------------+----------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | apache   | 2.4.6    |
+------------+----------+----------+----------+----------+

My next try was to create two views so that I could perform the JOIN without throwing the tag filter into the mix:

mysql> create view gc_old as select uid,server,tag,software,revision
    -> from gocore where tag = 'old';
Query OK, 0 rows affected (0.00 sec)

mysql> create view gc_new as select uid,server,tag,software,revision
    -> from gocore where tag = 'new';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from gc_old;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   1 | enterprise | old  | apache   | 2.2.25   |
|   4 | enterprise | old  | geronimo | 2.1.7    |
+-----+------------+------+----------+----------+
2 rows in set (0.00 sec)

mysql> select * from gc_new;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   2 | enterprise | new  | apache   | 2.4.6    |
|   3 | enterprise | new  | tomcat   | 7.0.42   |
+-----+------------+------+----------+----------+
2 rows in set (0.00 sec)

mysql> select old.server, old.software, old.revision, new.revision
    -> from gc_old as old left join gc_new as new
    -> on old.software = new.software;
+------------+----------+----------+----------+
| server     | software | revision | revision |
+------------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | 2.4.6    |
| enterprise | geronimo | 2.1.7    | NULL     |
+------------+----------+----------+----------+
2 rows in set (0.00 sec)

That works better - I now see absence, not just change, but I still don't see introduction. And I had to go create views for this, which strikes me as inflexible as the dataset is added to over time.

My questions:

  1. How can I make sure that all the holes show up, as represented by the NULL cells in the "My goal" section above?
  2. Can I do it within a single query, avoiding the creation of views as a crutch?

Any help you can provide is greatly appreciated. Thanks!
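One way to see both absences and introductions in a single statement, with no views, is to emulate a FULL OUTER JOIN (which MySQL lacks) as a UNION of two outer joins. A sketch against the gocore table above:

```sql
-- Rows matched in both tags come back once (UNION dedupes them);
-- the first branch contributes old-only rows, the second new-only rows.
SELECT o.server,
       COALESCE(o.software, n.software) AS software,
       o.revision AS old_revision,
       n.revision AS new_revision
FROM (SELECT * FROM gocore WHERE tag = 'old') o
LEFT JOIN (SELECT * FROM gocore WHERE tag = 'new') n
       ON o.server = n.server AND o.software = n.software
UNION
SELECT n.server,
       COALESCE(o.software, n.software),
       o.revision,
       n.revision
FROM (SELECT * FROM gocore WHERE tag = 'new') n
LEFT JOIN (SELECT * FROM gocore WHERE tag = 'old') o
       ON o.server = n.server AND o.software = n.software;
```

The inline subqueries play the role of the gc_old/gc_new views, so the query adapts as new tags are added without any extra DDL.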

MySQL create all rows for (over) normalized schema at start or as data is available

Posted: 30 Sep 2013 01:16 PM PDT

I am developing an app using MySQL, and the DB schema is provided. To me the schema looks (over)normalized: data for a single user will be spread over a dozen tables, around 150 rows in total. Some data will be available on user signup (around 40 rows); the rest will be added over time, as all the data is needed to properly use the app.

I am creating a stored procedure for creating new users. When creating a user, should I create ALL the rows, with either the available data or NULL values, or only the rows whose data is available? Will there be a performance benefit when all the rows for a user are stored together? For example, one table will have exactly 11 rows per user, but when a user joins only 8 of them can be populated; the other 3 will be added after a while. So should I create those 3 rows with NULLs? The app will display data for all the available rows together, so if all the rows a query has to return are adjacent, will that provide noticeable performance?

EDIT: It's basically a match-making app. The tables are stripped down to give you an overview.
User Table

CREATE TABLE IF NOT EXISTS `tbl_user` (
    `id_user` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `email` char(60) NOT NULL,
    ....
    PRIMARY KEY (`id_user`),
    UNIQUE KEY `email` (`email`)
);

Then there is a basic info table which has a dozen questions.

CREATE TABLE IF NOT EXISTS `tbl_criteria` (
    `id_criteria` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `criteria_text` char(200) NOT NULL,
    ....
    PRIMARY KEY (`id_criteria`)
);

and then we have an intersection table

CREATE TABLE IF NOT EXISTS `tbl_user_criteria` (
    `id_user` int(10) unsigned NOT NULL,
    `id_criteria` int(10) unsigned NOT NULL,
    `id_answer` tinyint(3) unsigned NOT NULL,
    PRIMARY KEY (`id_user`, `id_criteria`)
);

and there are like dozen more intersection tables.

MS Sql Server Management Studio gives error on start

Posted: 30 Sep 2013 01:41 PM PDT

When I try to connect with my sa account in SQL Server Management Studio, it gives me this error:

PS: the error message is in Turkish, but you will see the English version in the third image.

enter image description here

Then I clicked on the "Get Help For" link, as shown above. It showed me this:

enter image description here

So I clicked the "Yes" button. It then opened the web page below:

PS: the error message is the same as in the first image, but in English.

enter image description here

After that, I tried to start the SQL Server process in the Services window, per the third image. But it gave me this error message:

It means: "Windows could not start the SQL Server service on the local computer. For more information, see the event logs. If this is not a Microsoft service, contact the provider and give this specific error code: 3417"

I wanted to look at the system event logs in the C:\Windows\System32\winevt\Logs directory, but there were 84 different .evtx files and I didn't know what to do. How can I fix this?

PS: I'm also a little bit confused about this problem's scope. I thought about posting this on Super User and/or Stack Overflow, but this place looked more suitable to me. If I'm mistaken, please tell me.

enter image description here

edit:

I opened the Event Log application in Control Panel\Administrative Tools according to Jon Siegel's suggestions and filtered all event logs from the "SQL Server Reporting Services (MSSQLSERVER)" source. There are 82 events from that source, but they're all informational, with event ID 0. When I right-clicked one of them and chose Event Properties, here is the Details tab:

PS: after EventData, it says "Service started successfully"

enter image description here

SQL Server full text indexing disabled

Posted: 30 Sep 2013 12:26 PM PDT

I have a SQL Server 2008 R2 server instance where I'm unable to use full text catalogs. It was upgraded from SQL Server 2008. When I go to the database properties of an existing database or try to create a new database the "Use full-text indexing" checkbox is disabled.

Some things that I have checked are:

  • Full text search service is installed and started
  • Named pipes protocol is enabled
  • Full text service is running under local system account
  • EXEC sp_fulltext_database 'enable' does run successfully

What else could be wrong? I'm stuck on this.

How do I access the build configuration in SSDT, i.e. SSIS 2012? [on hold]

Posted: 30 Sep 2013 12:21 PM PDT

Oracle database on different disk

Posted: 30 Sep 2013 11:47 AM PDT

I have a VM with Oracle Server installed; the Oracle home is currently defined locally. The VM's disk is no longer expandable because of snapshots (VMware Workstation claims expansions and snapshots don't play nicely together). I had to create a new virtual disk to be able to host a dmp file given to me, which is just over 70 GB.

With that said, how can I import this dmp onto the new disk?

The current Oracle home is C:\Oracle and the new drive is D:.

I have minimal experience with Oracle so I would appreciate a detailed answer.

Thanks
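A minimal sketch of the usual Data Pump route, assuming the dmp file was created with expdp (all names here are hypothetical): point a directory object at the new disk so the import can read the dump, and create the target tablespace with its datafile on D: so the imported data lands on the new drive.

```sql
-- 1. Directory object on the new disk, where the dmp file lives.
CREATE OR REPLACE DIRECTORY dmp_dir AS 'D:\dumps';

-- 2. Tablespace whose datafile lives on the new disk.
CREATE TABLESPACE app_data
  DATAFILE 'D:\oradata\app_data01.dbf' SIZE 10G AUTOEXTEND ON;
```

The import itself then runs from the OS command line, remapping into the new tablespace, e.g. `impdp system DIRECTORY=dmp_dir DUMPFILE=export.dmp REMAP_TABLESPACE=users:app_data` (adjust the source tablespace name to whatever the dump actually contains).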

Multiple versions of SQL Server on the same development machine

Posted: 30 Sep 2013 10:38 AM PDT

I am consolidating some SQL Servers to simplify administration. I am thinking about a new VM with SQL Server 2005/2008R2/2012 each as a separate named instance on the same machine. This machine will only be used in a 'sandbox' environment primarily to test restores of our backups. No applications will be hitting this.

Am I likely to run into any issues with multiple versions? I remember running into issues years ago with SQL 2000/2005 on the same box. Do you folks use separate machines for restore testing?

How to partition a table by timestamp, where data from every i'th day goes to partition i?

Posted: 30 Sep 2013 12:14 PM PDT

I'm looking into partitioning a table in my InnoDB database. I have a column corresponding to a UTC timestamp, and I want to partition around that.

The basic idea that I want to capture is a partitioning scheme where, with a fixed number of partitions, we cycle through them as follows:

Given 3 partitions (3 for simplicity)

  • Data with timestamp column from day 1 goes into partition 1
  • Data with timestamp column from day 2 goes into partition 2
  • Data with timestamp column from day 3 goes into partition 3
  • Data with timestamp column from day 4 goes into partition 1
  • ..... day 5 .... partition 2
  • ... rinse and repeat

Basically, extract the day out of the timestamp and put the row into partition DAY MOD N where DAY is the day that the timestamp corresponds to (filtering out hours/minutes/seconds) and N is the number of partitions.

This is probably easily done, but I can't find a similar example to emulate this with. What would the ALTER TABLE query be to partition in this fashion?

Update

By atxdba's suggestion, I tried partitioning by hash. I tried the following statement:

ALTER TABLE table_to_partition PARTITION BY HASH(DAYOFMONTH(FROM_UNIXTIME(my_timestamp))) partitions 8;  

This results in error code 1564: This partition function is not allowed. Looking at the list of limitations for partitioning, it doesn't appear that FROM_UNIXTIME is supported for partitioning a table, so a different transformation from timestamp to date is required.
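Assuming my_timestamp is stored as an integer Unix timestamp, one workaround is to do the day arithmetic with integer operators, which the partitioning rules do allow (a sketch; 86400 is the number of seconds in a day, and HASH applies MOD N itself, matching the DAY MOD N scheme described above):

```sql
ALTER TABLE table_to_partition
PARTITION BY HASH(my_timestamp DIV 86400)
PARTITIONS 8;
```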

I have to create INDEX but which? [on hold]

Posted: 30 Sep 2013 11:41 AM PDT

Which index should I create for this query?

SELECT *
FROM TABLE
WHERE A = 1
  AND B = '5427dbfac0a8280d74bc672c72636a4f'
  AND C = 'BIL'
  AND D = 'B'
  AND E = '01'
GROUP BY F, G, E, H, I
ORDER BY B, C, E, G, H, I;
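A reasonable starting point is a composite index covering the equality-filtered columns (a sketch; the index and table names are placeholders, and the best column order depends on each column's selectivity):

```sql
-- All five WHERE predicates are equality tests, so a single composite
-- index over them lets the engine seek directly to the matching rows.
CREATE INDEX ix_filter ON my_table (A, B, C, D, E);
```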

How can I synchronize data between two RDS MySQL databases

Posted: 30 Sep 2013 05:48 PM PDT

I have two Amazon RDS MySQL instances; one is MySQL 5.5 and the new one is 5.6. I want to upgrade from 5.5 to 5.6, but you cannot do in-place upgrades at this time, so we are trying to migrate the data. I did a dump of the 5.5 instance and imported it into the 5.6 one, but since I need to keep our downtime to a minimum, I decided to try to set up data synchronization between the two.

What are my options for this?

  • I have a few tables that are huge! (1-8gb)
  • Tables are all innodb
  • Dump download took 3 hours
  • Dump Upload took 4+ hours

I need to try to stay under 5 hours total!
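One hedged option is RDS's external-replication stored procedures (assuming they are available on your engine versions; the host, credentials, and binlog coordinates below are placeholders): seed the 5.6 instance from the dump, then replay changes from the 5.5 binlog until cutover, which keeps the outage to the final catch-up window.

```sql
-- On the new 5.6 instance: replicate from the old instance, starting
-- at the binlog position recorded when the dump was taken.
CALL mysql.rds_set_external_master(
    'old-instance.xxxx.us-east-1.rds.amazonaws.com', 3306,
    'repl_user', 'repl_password',
    'mysql-bin-changelog.000123', 120, 0);
CALL mysql.rds_start_replication;

-- At cutover: stop writes on the old instance, wait for zero lag, then:
CALL mysql.rds_stop_replication;
```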

Wrong error in SSIS - [Execute SQL Task] Error: There is an invalid number of result bindings returned for the ResultSetType: “ResultSetType_Rowset”

Posted: 30 Sep 2013 02:50 PM PDT

I have an Execute SQL Task which returns only one row with one column, a number. I set my result set to "single row". Despite that, the task fails. Why, and how do I fix it?

[Execute SQL Task] Error: There is an invalid number of result bindings returned for the   ResultSetType: "ResultSetType_Rowset".  

storing arrays or images in a database table

Posted: 30 Sep 2013 06:06 PM PDT

I'm a beginner with both database design and use of forums such as this one so forgive me if I forget to provide some relevant information below.

Basically I'm trying to store a set of arrays, in this case raster images, of 2500*2500 in a database in order to extract timeseries for individual pixels. I currently have roughly 1500 images and every day an image is added.

The original raw files (byte type) are small, but my database table grows very quickly in the following form:

pixelnr, date, rastervalue
PK clustered (pixelnr, date)

I understand that this is because I need a pixelnr and date for every pixelvalue. Unfortunately I don't know how else to store the data in a database.

I've tried to optimize the table for select statements of timeseries per pixel with a clustered PK on pixelnr,date. The drawback is that inserts of new images take an increasingly long time.

There are only two things I do with this table:

  • bulk insert from a CSV file. I convert a new image to a CSV file with the same form as the table. It will have all pixels, but only for a single date.

  • run simple select queries to derive time series for a number of pixels, usually up to 25000 pixels:

    SELECT Pix, Date, Val
    FROM FullTable
    INNER JOIN PixelSubset ....

I now have two questions:

1) Is there a more efficient way to store this kind of data? I've not found much useful on storing arrays in an RDBMS.

2) How can I improve the performance of bulk inserts (all pixels for a single date) into the table while keeping the clustered index optimized as described above?

I use SQL server 2012 developer edition on a PC with 8GB RAM and a RAID system. There's not much room for hardware changes....

Any pointers, particularly to more efficient storage of such data in an RDBMS, would be greatly appreciated!
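On the bulk-insert side, one lever worth testing (a sketch; the file path is hypothetical) is asserting that the CSV is already sorted in clustered-key order and taking a table lock, so SQL Server can load with minimal logging instead of sorting the batch first:

```sql
BULK INSERT dbo.FullTable
FROM 'C:\images\pixels_20130930.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    -- Assert the file is pre-sorted to match the clustered PK (pixelnr, date).
    ORDER (pixelnr, date),
    TABLOCK
);
```

Since each file holds one date for all pixels, sorting the CSV by pixelnr during conversion makes the ORDER hint truthful at no extra cost.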

How can I integrate a new system to a design scheme I previously created?

Posted: 30 Sep 2013 02:13 PM PDT

This is a follow-up to a question I wrote a while ago, for reference, you can look it up here.

I've developed a curriculum based database scheme that looks like this(Thanks Joel Brown) enter image description here

Now that a new educational system has been introduced (while the old one is still not deprecated), I have been looking for a way to integrate both curriculum systems without creating a whole new system. As the two systems are fundamentally different, this has been quite a challenge for me.

Here's a picture describing the new system . . .

enter image description here

(English level 1, 2, and 3 are classified as courses)

See, the new system removes the whole grades system. A jr. high school student can enroll in the exact same course(English level 3, for example) as a sr. high school student.

Is there a way to make those two systems work with my current design scheme? Or should I create a whole new scheme for the new system?

Targeting MSX with job on Multi-Server Administration

Posted: 30 Sep 2013 02:24 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would have thought that running these same jobs on the master should not require managing a local job in addition to the multi-server jobs.

Any help is appreciated.

FileWatcher Troubleshooting Options

Posted: 30 Sep 2013 11:24 AM PDT

I have setup a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why this isn't working. After scaling back the code to a bare minimum, here are somethings I have already done.

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just have this: FILE_TRANSFER error is: with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
END;
/

--Set a one minute check interval.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
  vFile utl_file.file_type;
BEGIN
  EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
  vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
  utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
  utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;

What if I want to keep using MyISAM

Posted: 30 Sep 2013 10:31 AM PDT

In my current use of MyISAM, the databases are extremely small and are kept for a few weeks, after which they are deleted and completely rebuilt from a clean slate. The script uses MyISAM tables, and other than the lack of support in the latest MySQL versions, I have never had a problem with the way they work.
Changing to InnoDB would be time consuming and provide zero benefit. However, I am concerned that eventually there may come a time when either the older MySQL version is not available or some other process forces the move.

My question is: does anyone know of a specific reason to change from MyISAM to InnoDB just to keep up with MySQL?

If I can keep using what works for my specific use case, I see no reason to spend time on testing and possible errors when more pressing issues need attention now. MyISAM is not causing any problems at this time.

restrict user host settings to socket connection only

Posted: 30 Sep 2013 01:24 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a per-account basis?
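A hedged sketch of one partial approach: keep only the 'localhost' host entry for root, so connections arriving by IP address no longer match any root account. The caveat is that with name resolution enabled, a TCP connection from 127.0.0.1 may still resolve to 'localhost', so this is not as strict a guarantee as skip-networking.

```sql
-- Remove any TCP-reachable host rows for root, keeping the socket-style
-- 'localhost' entry (assumes no root access via 127.0.0.1 is wanted).
DELETE FROM mysql.user
WHERE User = 'root' AND Host <> 'localhost';
FLUSH PRIVILEGES;
```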

Unable to create a new listener in oracle 11g

Posted: 30 Sep 2013 03:24 PM PDT

In Net Manager I get an error message "A valid logfile is required" when I click on the listener and choose the '+' button, and then it keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 30 Sep 2013 10:24 AM PDT

Is it possible to assign a grant to a table for all users, or a default permission so that when new users are created they will have the specific grants for that table to SELECT, UPDATE, INSERT and DELETE?
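If granting to every user, current and future, is acceptable, most engines expose a PUBLIC role for exactly this (a sketch; the table name is a placeholder, and note that granting to PUBLIC is usually too broad for production):

```sql
-- Every existing and future user inherits these privileges via PUBLIC
-- (works as written in Oracle and PostgreSQL).
GRANT SELECT, INSERT, UPDATE, DELETE ON my_table TO PUBLIC;
```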

Performance difference between MySQL and PostgreSQL for the same schema/queries

Posted: 30 Sep 2013 07:24 PM PDT

I'm a newbie DBA, and I have experience in Microsoft SQL Server but I want to jump to FLOSS.

I'm starting a company, and we are developing an app (PHP) with a Postgres backend; we ran some tests comparing it with MySQL too, and we observed that MySQL is twice as fast as PostgreSQL.

I did a tangible performance test:

  • Same columns in table with equivalent column datatypes.
  • Same number of rows.
  • Same indexes in both (primary key included).
  • The CPU load is idle on both, and the Postgres machine's hardware is significantly better.
  • And the same query (obviously).

What am I doing wrong?

P.S: I read many "howtos" on performance tuning for database engines.
P.S(2): We're using InnoDB (one file per table) on the MySQL database.


Hi Mat!

I did the three common select (and hardest) queries.

About the disks: they're certainly not the same; the Postgres machine has an SSD (almost three times faster).

MySQL cache data:

+------------------------------+----------------------+
| Variable_name                | Value                |
+------------------------------+----------------------+
| binlog_cache_size            | 32768                |
| have_query_cache             | YES                  |
| key_cache_age_threshold      | 300                  |
| key_cache_block_size         | 1024                 |
| key_cache_division_limit     | 100                  |
| max_binlog_cache_size        | 18446744073709547520 |
| query_cache_limit            | 1048576              |
| query_cache_min_res_unit     | 4096                 |
| query_cache_size             | 16777216             |
| query_cache_type             | ON                   |
| query_cache_wlock_invalidate | OFF                  |
| table_definition_cache       | 256                  |
| table_open_cache             | 64                   |
| thread_cache_size            | 8                    |
+------------------------------+----------------------+

I don't know how to view this in PostgreSQL.
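For what it's worth, the closest PostgreSQL equivalent is SHOW for individual parameters, or a query against pg_settings (a sketch of the memory/cache-related ones):

```sql
-- PostgreSQL: inspect the main memory- and cache-related settings.
SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size');
```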

Thanks in advance.

NoSQL approach: design table "scheme"

Posted: 30 Sep 2013 01:49 PM PDT

We need to store simple log data in a database, but we also need to fetch and filter data in real time, based on 'user', a simple 'tag' (e.g. redis, memcache, php, mysql, etc.), and timestamp. We need to scale horizontally, with very fast data access on billions of rows.

In a SQL approach, table can be like this:

ID | timestamp | tag | user       | log text
1  | 19543532  | 1   | root       | { text log }
2  | 19543532  | 3   | redis-user | { text log }

where tags 1 and 3 are different and related to another table (i.e. tag_id | tag_name). I think this is a relational approach, and we can create three indexes (timestamp, tag and user) in order to speed up data access.

What is a good practice to reproduce this in a NoSQL database like DynamoDB (AWS) where we can create only HASH or RANGE index? Does a SQL database fit better than a DynamoDB?

My first attempt is:

First table: ID hash index

ID | log text
1  | { text log }
2  | { text log }

Second table: USER, TIMESTAMP range index

user | timestamp | id
root | 123145122 | 1
redis| 123491241 | 2

Third table: TAG index

tag        | id
debug      | 1
production | 2

Thank you in advance!

User has no password set

Posted: 30 Sep 2013 03:07 PM PDT

I see these when running MySQLTuner.pl:

-------- Security Recommendations  -------------------------------------------
[!!] User '@debian' has no password set.
[!!] User '@localhost' has no password set.
------------------------------------------------------------------------------
  • Is it a security hole?
  • How to fix it?

Thanks
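Yes, these are the anonymous accounts (empty user name) that MySQL installs by default, and they are generally considered a security hole. A sketch of the usual fix (mysql_secure_installation does the same interactively; the host names come from the MySQLTuner output above):

```sql
-- Remove the anonymous ('') accounts for each host they exist on.
DROP USER ''@'localhost';
DROP USER ''@'debian';
FLUSH PRIVILEGES;
```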
