Wednesday, June 12, 2013

Get turno.tipo, maquina.patente and operador.nombre and status, complex query

Posted: 12 Jun 2013 08:16 PM PDT

I have this DB design. I'm trying to build a query that should return turno.tipo, maquina.patente, operador.nombre and detalle_emisores.revisado, but it's a bit complex for my knowledge. I started with this:

SELECT re.*, t.tipo, m.patente, o.nombre, de.revisado
FROM sdriving_registros_emisores re
LEFT JOIN sdriving_turno t ON re.idturno = t.idturno
LEFT JOIN sdriving_maquina_emisor me ON me.idmaquinaemisor = re.maquinaemisorid
LEFT JOIN sdriving_maquina m ON me.idmaquina = m.idmaquina
LEFT JOIN sdriving_operador o ON re.idoperador = o.idoperador
LEFT JOIN sdriving_detalle_emisores de ON de.idregistros = re.idregistros
WHERE o.idempresa = 1

Is that right? Any advice?

Error Code : 1248 Every derived table must have its own alias in MySQL

Posted: 12 Jun 2013 08:39 PM PDT

I have a problem here: when I try to run the SELECT query below, I get this error message:

Error Code : 1248 Every derived table must have its own alias

SELECT B.BRANCH_CODE, B.BRANCH_NAME, C.COMPANY_CODE, C.NAME, TSK.DATE_CREATE,
  CASE TB.BULK_TYPE
    WHEN 1 THEN 'Bulk Transfer'
    WHEN 2 THEN 'Bulk Payment'
    ELSE 'Payroll'
  END AS TRX_METHOD_E,
  CASE TB.BULK_TYPE
    WHEN 1 THEN 'Bulk Transfer'
    WHEN 2 THEN 'Bulk Pembayaran Tagihan'
    ELSE 'Pembayaran Gaji'
  END AS TRX_METHOD_I,
  TB.TOTAL_RECORD,
  TB.ACCOUNT_NO,
  TSK.TRX_TOTAL,
  TC.TOTAL_CHARGE,
  DATE(TSK.DATE_TRX) AS DATE_TRX,
  TB.REF_ID,
  CASE
    WHEN TSK.TRX_COUNT_SUCCESS = TSK.TRX_COUNT_ALL THEN 'All Success'
    WHEN TSK.TRX_COUNT_FAIL = TSK.TRX_COUNT_ALL THEN 'All Failed'
    WHEN TSK.TRX_COUNT_SUCCESS > 0 AND TSK.TRX_COUNT_FAIL > 0 THEN 'Partial Success (' || TSK.TRX_COUNT_SUCCESS || '/' || TSK.TRX_COUNT_ALL || ')'
    ELSE 'Pending Execution'
  END AS TRX_STATUS_E,
  CASE
    WHEN TSK.TRX_COUNT_SUCCESS = TSK.TRX_COUNT_ALL THEN 'Berhasil Semua'
    WHEN TSK.TRX_COUNT_FAIL = TSK.TRX_COUNT_ALL THEN 'Gagal Semua'
    WHEN TSK.TRX_COUNT_SUCCESS > 0 AND TSK.TRX_COUNT_FAIL > 0 THEN 'Berhasil Sebagian (' || TSK.TRX_COUNT_SUCCESS || '/' || TSK.TRX_COUNT_ALL || ')'
    ELSE 'Tunggu Eksekusi'
  END AS TRX_STATUS_I
FROM CB_TASK_BULKS TB
JOIN CB_TASKS TSK ON TSK.REF_ID = TB.REF_ID
JOIN CB_COMPANIES C ON C.COMPANY_ID = TSK.COMPANY_ID
JOIN CB_BRANCHES B ON B.BRANCH_CODE = C.BRANCH_CODE,
(
  SELECT REF_ID, SUM(CHARGE) AS TOTAL_CHARGE
  FROM
  (
    SELECT XTB.REF_ID, SUM(CHARGE) AS CHARGE
    FROM CB_TRANSFERS XT
    JOIN CB_TASK_BULK_DETAILS XTBD ON XTBD.BULK_DETAIL_ID = XT.BULK_DETAIL_ID
    JOIN CB_TASK_BULKS XTB ON XTB.REF_ID = XTBD.REF_ID
    GROUP BY XTB.REF_ID

    UNION

    SELECT XTB2.REF_ID, SUM(CHARGE) AS CHARGE
    FROM CB_TRANSFERS_DOM XTD
    JOIN CB_TASK_BULK_DETAILS XTBD2 ON XTBD2.BULK_DETAIL_ID = XTD.BULK_DETAIL_ID
    JOIN CB_TASK_BULKS XTB2 ON XTB2.REF_ID = XTBD.REF_ID
    GROUP BY XTB.REF_ID

    UNION

    SELECT XTB3.REF_ID, SUM(CHARGE) AS CHARGE
    FROM CB_PAYMENTS XP
    JOIN CB_TASK_BULK_DETAILS XTBD3 ON XTBD3.BULK_DETAIL_ID = XP.BULK_DETAIL_ID
    JOIN CB_TASK_BULKS XTB3 ON XTB3.REF_ID = XTBD.REF_ID
    GROUP BY XTB.REF_ID
  )
  GROUP BY REF_ID
) TC
WHERE TC.REF_ID = TSK.REF_ID
AND (TSK.TRX_COUNT_SUCCESS > 0 OR TSK.TRX_COUNT_FAIL > 0);

Can somebody help me find the problem here? I think I've given every table its own "name", like TB for CB_BULKS_DATA (for example).
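For reference, MySQL raises error 1248 whenever a subquery in the FROM clause lacks an alias — here the innermost derived table (the one wrapping the three UNIONed SELECTs) has none. A minimal sketch of the pattern, with hypothetical table and column names:

```sql
-- Fails with error 1248: the derived table in FROM has no alias.
-- SELECT SUM(charge) FROM (SELECT charge FROM payments);

-- Works: every derived table carries its own alias, even if unused.
SELECT SUM(dt.charge)
FROM (SELECT charge FROM payments) AS dt;
```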

Can't Find CustomerRefListID Field in Table in QuickBooks Enterprise Database

Posted: 12 Jun 2013 06:18 PM PDT

I am trying to pull data into a PowerPivot worksheet in Excel and create relationships. My reference sheet says that the ListID field on the Customer table is related to the CustomerRefListID field on the SalesOrder table, but when I look in the SalesOrder table I find no trace of anything that resembles it. I am no expert in databases or database modeling, but am looking for advice or suggestions that could lead me in the right direction. Below is the link to the reference table that shows the relationships.

Thanks for the Help

Derek

(http://doc.qodbc.com/qodbc/20/tables/table_relations_us4214.html?trelations_id=22&tn_us=TRUE)

pg_ctl start gives FATAL: role does not exist

Posted: 12 Jun 2013 06:18 PM PDT

On my system, whoami returns Dan. I run initdb with the --username=PgUser argument. Then as OS user Dan I run pg_ctl start on the newly-initialized cluster, and it gives this output:

LOG:  database system was shut down at 2013-06-13 00:54:33 UTC
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
FATAL:  role "Dan" does not exist
 done
server started

The server appears to have started up normally. I believe the error occurs because it wants me to run pg_ctl as the OS user named PgUser. What I want to know is:

  • Is that belief correct?
  • Is there a way to specify the user as PgUser without using su? I would prefer not to create a new OS user and would like to use a PostgreSQL user with a different username (I'm doing some code refactoring that makes this more convenient).
  • Is the instance I started crippled in some way because I started it as OS user Dan?
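For what it's worth, the server itself starts fine; the FATAL line most likely comes from a client-side readiness probe connecting with the default role name, which is the OS user. One hedged workaround, assuming the cluster superuser really is PgUser, is to create a matching role once:

```sql
-- Run once, connected as the bootstrap superuser:  psql -U PgUser postgres
-- "Dan" is the OS user name from the question; grant further
-- privileges only if actually needed.
CREATE ROLE "Dan" LOGIN;
```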

Home-rolled gridview search results show rows that are supposed to be ignored

Posted: 12 Jun 2013 06:17 PM PDT

Note: I tried posting this on Stack Overflow but didn't get any good bites. I'm still researching this but wanted to see if any db czars can assist here.

I'm in the process of trying to troubleshoot some SQL 2012 code in an ASP/C# gridview search that I rolled myself. I have a search box that searches on specific fields but the twist here is that I only want results after a certain row.

In this database I have a sequence that autogenerates a number in a column called 'record_number' for every entry in this table, and I only want results after record #500. The entries are created manually via a different page.

Right now my search code is as follows:

SelectCommand="SELECT [record_number], [column_a], [column_b], [column_c],
    [column_d], [column_e], [column_f], [column_g], [column_h],
    [column_i], [column_j]
  FROM [schema].[table_name]
  WHERE record_number >= 500
    AND ([column_a] LIKE '%' + @column_a + '%')
    OR  ([column_c] LIKE '%' + @column_c + '%')
    OR  ([column_e] LIKE '%' + @column_e + '%')
    OR  ([column_g] LIKE '%' + @column_g + '%')
    OR  ([column_h] LIKE '%' + @column_h + '%')
    OR  ([column_j] LIKE '%' + @column_j + '%')"

I can post my SelectParameters if needed, or anything else. The challenge I'm facing is that when I run a search for just the letter 'c' with this code, the results that come back have record_number values both less than and greater than 500. At this point I'm spinning my wheels, as I'm not seeing where the problem lies.

If more information is needed by the masses, please let me know.

EDIT

Forgot this part - I've tried moving the conditions around and consolidating them, but no go.
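A likely culprit, for future readers: in T-SQL, AND binds more tightly than OR, so the WHERE clause above is parsed as (record_number >= 500 AND column_a LIKE ...) OR column_c LIKE ... — every OR branch after the first bypasses the record_number filter. A hedged sketch of the fix, using the same placeholder column names:

```sql
SELECT [record_number], [column_a], [column_c]
FROM [schema].[table_name]
WHERE record_number >= 500
  AND (   [column_a] LIKE '%' + @column_a + '%'
       OR [column_c] LIKE '%' + @column_c + '%' )  -- parentheses group the OR branches
```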

I'm trying to start a DB app for a project I'm doing and keep getting stuck at sqlite3.dll

Posted: 12 Jun 2013 05:43 PM PDT

I have searched Google and tried various solutions but keep getting the same message box no matter which sqlite3.dll I download. I keep getting 'The procedure entry point sqlite3_backup_finish could not be located in the dynamic link library sqlite3.dll'. I have tried everything I can think of and would greatly appreciate any help. Thanks.

Auto-increment key and Foreign Key as composite for relational database

Posted: 12 Jun 2013 03:06 PM PDT

I do not have any experience with databases other than what I have from just playing with Workbench and trying to understand how I need to set up this database for a project. I have been searching the internet and have become increasingly confused about how to do what I want. Below is what I am trying to accomplish.

I have a need for a MySQL database that will hold patient information. Patient information will be name, id (unique), and so on as well as all of the tests and samples for each test.

  • Each patient can have multiple tests
  • Each test can have multiple samples
  • If I delete a patient then all test and samples are deleted
  • If I delete a test then all samples will be deleted
  • If I delete all samples of one test OR all tests of one patient then the test OR patient should not be deleted.

So far I understand that a relational database is what I need. I also understand that I can use foreign keys to link the patient IDs in each table, as well as the test IDs. I have also come to the conclusion that I would like referential integrity, so that I don't have to manually manage deleting and updating rows; this would also prevent orphaned rows. I believe this can be done with a composite key, but this is where I start getting confused about how to set it up.

I would expect it to work like this (with each line being an insert statement):

Patient Table:

|-------------------|
|Patient ID  |Name  |
|-------------------|
|12345       |ANG   |
|54321       |JUE   |
|-------------------|

Tests Table:

|----------------------|
|Test ID  |Patient ID  |
|----------------------|
|1        |12345       |
|2        |12345       |
|3        |12345       |
|1        |54321       |
|2        |54321       |
|4        |12345       |
|----------------------|

Sample Table:

|----------------------------------|
|Sample ID  |Test ID  |Patient ID  |
|----------------------------------|
|1          |1        |12345       |
|2          |1        |12345       |
|3          |1        |12345       |
|1          |2        |12345       |
|1          |1        |54321       |
|1          |2        |54321       |
|2          |2        |54321       |
|2          |2        |12345       |
|----------------------------------|

Can this be done easily? I also know this could be accomplished with one or two triggers, but I understood that not to handle deletions and the like. Is there another way to do this that is easier to implement? Also, the data being written to and read from this database is handled by my LabVIEW program.
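A hedged sketch of one way to model this in MySQL/InnoDB (table and column names are assumptions based on the tables above): composite primary keys plus ON DELETE CASCADE reproduce the delete rules in the bullet list without triggers, and deleting child rows never touches the parent.

```sql
CREATE TABLE patient (
  patient_id INT NOT NULL,
  name       VARCHAR(100),
  PRIMARY KEY (patient_id)
) ENGINE=InnoDB;

CREATE TABLE test (
  patient_id INT NOT NULL,
  test_id    INT NOT NULL,              -- numbered per patient, as in the example
  PRIMARY KEY (patient_id, test_id),
  FOREIGN KEY (patient_id) REFERENCES patient (patient_id)
    ON DELETE CASCADE                   -- deleting a patient deletes their tests
) ENGINE=InnoDB;

CREATE TABLE sample (
  patient_id INT NOT NULL,
  test_id    INT NOT NULL,
  sample_id  INT NOT NULL,              -- numbered per test
  PRIMARY KEY (patient_id, test_id, sample_id),
  FOREIGN KEY (patient_id, test_id) REFERENCES test (patient_id, test_id)
    ON DELETE CASCADE                   -- deleting a test deletes its samples
) ENGINE=InnoDB;
```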

Linked Server to Mirrored Database Fails

Posted: 12 Jun 2013 01:20 PM PDT

I've set up a mirrored database on two servers, Server1 and Server2.

On Server3, I'm trying to setup a linked server that will work regardless of which server has the active database.

I'm using this to setup the linked server.

EXEC sp_addlinkedserver
  @server = N'MyLinkedServer',
  @srvproduct = N'',
  @provider = N'SQLNCLI',
  @provstr = N'Server=SERVER1;FailoverPartner=SERVER2;Network=dbmssocn',
  @catalog = N'MyDatabase';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'data access',
  @optvalue = N'true';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'rpc',
  @optvalue = N'false';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'rpc out',
  @optvalue = N'true';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'connect timeout',
  @optvalue = N'60';

EXEC master.dbo.sp_addlinkedsrvlogin
  @rmtsrvname = N'MyLinkedServer',
  @locallogin = NULL,
  @useself = N'False',
  @rmtuser = N'sqluser',
  @rmtpassword = N'sqlpassword';

As long as MyDatabase on Server1 is the principal, the linked server works. When I manually failover to Server2, the linked server fails.

The error messages are:

Msg 11001, Level 16, State 1, Line 0
TCP Provider: No such host is known.
Msg 18456, Level 14, State 1, Line 1
Login failed for user 'sqluser'.
Msg 4060, Level 11, State 1, Line 1
Cannot open database "MyDatabase" requested by the login. The login failed.

The 18456 and 4060 error messages repeat until the query times out after 60 seconds.

On Server1, I can see repeated failures trying to login to "MyDatabase". It's failing because Server1 is the mirror. I see no attempts to connect to Server2 which is the principal.

If I recreate the linked server with Server2 as the principal and Server1 as the mirror, it will work until I fail back to Server1.

The mirrored servers are SQL Server 2012 and the box I'm trying to create a linked server on is SQL Server 2005, but I also have the same problem when creating the linked server on a SQL Server 2008 R2 box.

I've seen reports that it does work.

What am I missing?

Thank you!

Best practices to encrypt all the data

Posted: 12 Jun 2013 01:18 PM PDT

I'm working on a project where the client is afraid of their own employees (i.e. junior IT guys whom the client does not trust) querying the database to get sensitive information.

The database is a new SQL Server database, so I have some kind of freedom.

Googling a bit, I landed on SQL Server Transparent Data Encryption and wondered whether that is what I need, and if it is not...

What are the best practices to encrypt all the columns, in all the tables, in a database in order to prevent users from querying it?
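For context: TDE encrypts data at rest (data files and backups), so it does not stop an authorized login from simply running SELECT — that calls for tightened permissions or cell-level encryption instead. Still, since the question mentions TDE, here is a minimal sketch of enabling it, with the certificate name and password as placeholders:

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
  WITH ALGORITHM = AES_256
  ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;
```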

how to verify mysql backup file

Posted: 12 Jun 2013 12:58 PM PDT

Is there any tool to verify a backup taken with mysqldump, without manually restoring it?

I have to take a full backup daily, so it is difficult for me to verify the backups by restoring them every day; that takes time as well as temporary disk space.

Please suggest how I can verify my backups online.

'skip rows' in a flat file source does not work as advertised

Posted: 12 Jun 2013 12:46 PM PDT

In SSIS (2012 specifically) there is an option in the Flat File data source to skip a number of lines at the beginning of a file. The preview appears correct. However, when the data flow executes, those rows still appear to be validated, so if they contain invalid data (blank rows) the data source fails. This prevents using a conditional split to remove the rows, since the data flow never starts.

Is this an accurate description of the normal behaviour, and is there any way to work around it?

Sample data looks like -

COL1|COL2|COL3
abc|123|10
def|456|20
ghi|789|30

Error Configuration

Is it recommended to run an Oracle schema backup when there are active sessions?

Posted: 12 Jun 2013 12:51 PM PDT

I need to back up two schemas in a production database (Oracle 11gR2) that are being accessed by an application and some other systems. These schemas are quite small, and exp will be used for the export.

Are there any best practices or recommendations for backups using exp? I cannot afford to have the users face any interruption during these backups. For example, has anyone experienced the backup utility hang?

Infinite Loop While Running Query

Posted: 12 Jun 2013 05:28 PM PDT

When I run this query from the command line, it loops continuously. I'm not sure why this is occurring. Could anyone explain this to me and offer a fix?

I will be using a query similar to this in a Python script and need to ensure I get the expected data.

SELECT `leads`.`status`, `leads_cstm`.`apptdateourtime_c`
FROM leads, leads_cstm
WHERE ((`leads`.`status` = 'Converted')
   AND (`leads_cstm`.`apptdateourtime_c` BETWEEN '2013-06-04 00:00:00'
                                             AND '2013-06-05 00:00:00'))
OR    ((`leads`.`status` = 'AppointmentScheduled')
   AND (`leads_cstm`.`apptdateourtime_c` BETWEEN '2013-06-04 00:00:00'
                                             AND '2013-06-05 00:00:00'));

I'm using MySQL, version 5.1.69

Table structure for leads:

CREATE TABLE `leads` (
  `id` char(36) NOT NULL,
  `tbl_id` int(11) DEFAULT NULL,
  `date_entered` datetime DEFAULT NULL,
  `date_modified` datetime DEFAULT NULL,
  `modified_user_id` char(36) DEFAULT NULL,
  `created_by` char(36) DEFAULT NULL,
  `description` text,
  `deleted` tinyint(1) DEFAULT '0',
  `assigned_user_id` char(36) DEFAULT NULL,
  `salutation` varchar(255) DEFAULT NULL,
  `first_name` varchar(100) DEFAULT NULL,
  `last_name` varchar(100) DEFAULT NULL,
  `title` varchar(100) DEFAULT NULL,
  `department` varchar(100) DEFAULT NULL,
  `do_not_call` tinyint(1) DEFAULT '0',
  `phone_home` varchar(100) DEFAULT NULL,
  `phone_mobile` varchar(100) DEFAULT NULL,
  `phone_work` varchar(10) DEFAULT NULL,
  `phone_other` varchar(100) DEFAULT NULL,
  `phone_fax` varchar(100) DEFAULT NULL,
  `primary_address_street` varchar(150) DEFAULT NULL,
  `primary_address_city` varchar(100) DEFAULT NULL,
  `primary_address_state` varchar(2) DEFAULT NULL,
  `primary_address_postalcode` varchar(20) DEFAULT NULL,
  `primary_address_country` varchar(255) DEFAULT NULL,
  `alt_address_street` varchar(150) DEFAULT NULL,
  `alt_address_city` varchar(100) DEFAULT NULL,
  `alt_address_state` varchar(100) DEFAULT NULL,
  `alt_address_postalcode` varchar(20) DEFAULT NULL,
  `alt_address_country` varchar(255) DEFAULT NULL,
  `assistant` varchar(75) DEFAULT NULL,
  `assistant_phone` varchar(100) DEFAULT NULL,
  `converted` tinyint(1) DEFAULT '0',
  `refered_by` varchar(100) DEFAULT NULL,
  `lead_source` varchar(100) DEFAULT NULL,
  `lead_source_description` text,
  `status` varchar(100) DEFAULT NULL,
  `status_description` text,
  `reports_to_id` char(36) DEFAULT NULL,
  `account_name` varchar(255) DEFAULT NULL,
  `account_description` text,
  `contact_id` char(36) DEFAULT NULL,
  `account_id` char(36) DEFAULT NULL,
  `opportunity_id` char(36) DEFAULT NULL,
  `opportunity_name` varchar(255) DEFAULT NULL,
  `opportunity_amount` varchar(50) DEFAULT NULL,
  `campaign_id` char(36) DEFAULT NULL,
  `birthdate` date DEFAULT NULL,
  `portal_name` varchar(255) DEFAULT NULL,
  `portal_app` varchar(255) DEFAULT NULL,
  `website` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_lead_acct_name_first` (`account_name`,`deleted`),
  KEY `idx_lead_last_first` (`last_name`,`first_name`,`deleted`),
  KEY `idx_lead_del_stat` (`last_name`,`status`,`deleted`,`first_name`),
  KEY `idx_lead_opp_del` (`opportunity_id`,`deleted`),
  KEY `idx_leads_acct_del` (`account_id`,`deleted`),
  KEY `idx_del_user` (`deleted`,`assigned_user_id`),
  KEY `idx_lead_assigned` (`assigned_user_id`),
  KEY `idx_lead_contact` (`contact_id`),
  KEY `idx_reports_to` (`reports_to_id`),
  KEY `idx_lead_phone_work` (`phone_work`),
  KEY `idx_leads_id_del` (`id`,`deleted`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

leads_cstm is as follows:

CREATE TABLE `leads_cstm` (
  `id_c` char(36) NOT NULL,
  `buinessdescription_c` varchar(255) DEFAULT NULL,
  `businessstatus_c` varchar(100) DEFAULT NULL,
  `numberofemployees_c` int(5) DEFAULT NULL,
  `numberofyearsinbusiness_c` varchar(3) DEFAULT NULL,
  `felonyorbankruptcy_c` tinyint(1) DEFAULT '0',
  `encouragedbyagency_c` tinyint(1) DEFAULT '0',
  `annualrevenue_c` decimal(26,6) DEFAULT NULL,
  `currency_id` char(36) DEFAULT NULL,
  `productsservices_c` tinyint(1) DEFAULT '0',
  `investmoney_c` tinyint(1) DEFAULT '0',
  `fivestar_c` tinyint(1) DEFAULT '0',
  `socioeconomicstatus_c` text,
  `visamastercard_c` tinyint(1) DEFAULT '0',
  `sixreferences_c` tinyint(1) DEFAULT '0',
  `investigatedgsa_c` tinyint(1) DEFAULT '0',
  `faxedinfopack_c` tinyint(1) DEFAULT '0',
  `apptdateourtime_c` datetime DEFAULT NULL,
  `apptdatetheirtime_c` datetime DEFAULT NULL,
  `apptstatus_c` varchar(100) DEFAULT NULL,
  `topic_c` varchar(100) DEFAULT NULL,
  `secondaryphone_c` varchar(13) DEFAULT NULL,
  `extension_c` varchar(6) DEFAULT NULL,
  `timezome_c` varchar(100) DEFAULT NULL,
  `cleadsource_c` varchar(255) DEFAULT NULL,
  `timeissuedatebegan_c` date DEFAULT NULL,
  `timingissuelength_c` varchar(100) DEFAULT NULL,
  `oldsmarteam_c` varchar(100) DEFAULT NULL,
  `manual_enter_c` tinyint(1) DEFAULT '0',
  `suggestedcloser_c` varchar(100) DEFAULT NULL,
  `lead_type_c` varchar(100) DEFAULT 'gsalead',
  `qualifier_c` varchar(30) DEFAULT NULL,
  `secondarycontact_c` varchar(50) DEFAULT NULL,
  `siccode_c` varchar(8) DEFAULT NULL,
  `pamcoqualifiermain_c` varchar(255) DEFAULT NULL,
  `typeofsource_c` varchar(100) DEFAULT NULL,
  `l_printed_c` tinyint(1) DEFAULT '0',
  PRIMARY KEY (`id_c`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
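One observation for future readers: the query never joins leads to leads_cstm, so MySQL computes a Cartesian product (every row of one table paired with every row of the other), which on sizeable tables can run long enough to look like an infinite loop. Assuming the usual SugarCRM convention that leads_cstm.id_c holds the matching leads.id, a sketch of the corrected query:

```sql
SELECT l.status, c.apptdateourtime_c
FROM leads l
JOIN leads_cstm c ON c.id_c = l.id   -- the join condition prevents the Cartesian product
WHERE l.status IN ('Converted', 'AppointmentScheduled')
  AND c.apptdateourtime_c BETWEEN '2013-06-04 00:00:00'
                              AND '2013-06-05 00:00:00';
```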

Database associative table

Posted: 12 Jun 2013 01:17 PM PDT

I have a coworker who's insisting on using associative tables to create nullable associations. For example, we have these tables:

cm.accounts
----------
id:int (PK, identity)
...

sm.services
-------------------
id:int (PK, identity)
...

cm.account_services
-------------------
fk_account_id:int (FK, cm.accounts.id)
fk_service_id:int (FK, nullable, sm.services.id)

and he's arguing that creating rows like

INSERT INTO cm.account_services
            (fk_account_id, fk_service_id)
VALUES      (123, NULL);

is perfectly fine. I disagree. The purpose of this "design" is to allow a temporary (or future) "relationship" to be created and updated later on. It doesn't have to be this way; this is my coworker's decision, not a requirement.

I believe this is not good practice at all; it simply adds more complexity and undefined behaviour than necessary. I know this question may spark an animated discussion, so I'm not looking for opinions, but facts.

With supporting references, can an expert help break this tie?

Thanks.

** NOTE **

Just found this paragraph here :

Nullable foreign keys are not considered to be good practice in traditional data modelling [...] This is not a requirement of Hibernate, and the mappings will work if you drop the nullability constraints.

** EDIT **

What I would suggest is something like this :

cm.accounts
----------
id:int (PK, identity)
...

sm.services
-------------------
id:int (PK, identity)
...

cm.account_services
-------------------
fk_account_id:int (FK, cm.accounts.id)
fk_service_id:int (FK, sm.services.id)
link_type:int    // or an enum, i.e. "validated", "deleted", "invalid", etc.

That way we could identify (instead of using null values) what kind of relationship the two rows have, giving more verbosity in the business logic, etc.

MySQL - What is the difference between the cleartext authentication and PAM authentication plugins?

Posted: 12 Jun 2013 04:42 PM PDT

PAM Authentication:

PAM enables a system to use a standard interface to access various kinds of authentication methods, such as Unix passwords or an LDAP directory.

Cleartext Client-Side Authentication Plugin

The client-side plugin can be used to send the password to the server in clear text. There is no corresponding server-side plugin.

I have read the documentation but could not get enough help from it. PAM authentication is supported only in MySQL Enterprise Edition, but cleartext authentication applies to all MySQL servers, as it is the client that invokes this kind of authentication mechanism.

Questions:

Has anybody practically tried the PAM or cleartext authentication plugins?

If so, please explain how to configure and set them up, with an example.
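Not a full walkthrough, but a hedged sketch of the MySQL Enterprise PAM setup: the plugin and statement names follow the MySQL manual, while the account name and the PAM service name 'mysql-unix' are assumptions (the service must match a file under /etc/pam.d/ on the server). The client then connects with --enable-cleartext-plugin so the plain password can be handed to PAM:

```sql
-- Load the server-side PAM plugin (MySQL Enterprise only).
INSTALL PLUGIN authentication_pam SONAME 'authentication_pam.so';

-- Account that authenticates through PAM rather than mysql.user passwords.
CREATE USER 'os_account'@'localhost'
  IDENTIFIED WITH authentication_pam AS 'mysql-unix';
```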

Which is the redundant relationship in this ER Diagram and why?

Posted: 12 Jun 2013 12:49 PM PDT

Which is the redundant relationship in this ER Diagram and why? A few of my friends and I have been trying to figure it out but keep going in circles.

(ER diagram showing the Employee, Department and Training Course entities and their relationships)

where the attributes of each entity are as follows:

Employee: Employee_No, Name, Address and Telephone_No.
Department: Department_No, Name and Location.
Training Course: Course Code, Title, Start_Date and Description.

Access to tables in SQL Server

Posted: 12 Jun 2013 04:35 PM PDT

I would like to know if there is a table where accesses to tables are recorded, per user or per system.

I'm using both SQL Server 2005 and 2008.

User permissions for Microsoft SQL Server

Posted: 12 Jun 2013 03:28 PM PDT

Please advise on granting permissions to a SQL Server user. The conditions are: this user must be able to create, delete, modify and drop tables/views/procedures in the user's default schema (not dbo), but not in any other schema. I have granted db_ddladmin, but with it the user can perform these actions on other schemas as well.
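A hedged alternative to db_ddladmin, sketched with an assumed user app_user and schema app: grant the database-level CREATE permissions, then scope them by granting ALTER on just that one schema (CREATE TABLE and friends only take effect in schemas the principal holds ALTER on):

```sql
-- 'app_user' and schema 'app' are placeholder names.
GRANT CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO app_user;

-- ALTER on a schema permits creating, altering and dropping
-- objects inside that schema only.
GRANT ALTER ON SCHEMA::app TO app_user;
```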

Why don't databases create their own indexes automatically?

Posted: 12 Jun 2013 12:43 PM PDT

I would have thought that databases would know enough about what they encounter often and be able to respond to the demands they're placed under that they could decide to add indexes to highly requested data.

Would Mongo's ObjectID work well in InnoDB's clustered index?

Posted: 12 Jun 2013 01:21 PM PDT

Mongo's ObjectID is defined like this:

ObjectId is a 12-byte BSON type, constructed using:

a 4-byte value representing the seconds since the Unix epoch,

a 3-byte machine identifier,

a 2-byte process id,

and a 3-byte counter, starting with a random value.

Assuming the PRIMARY KEY was BINARY(12), will this work as well as an auto-incremented INT? Does it count as sequential for InnoDB's clustered index? Would gaps in the timestamp be a problem?
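A hedged sketch of the table in question (the name is hypothetical). Since the leading four ObjectID bytes are a big-endian Unix timestamp, keys are roughly ascending, so inserts mostly append to the right-hand edge of the clustered index — close to, though not as strictly sequential as, AUTO_INCREMENT; gaps in the timestamp are harmless, since only the ordering matters:

```sql
CREATE TABLE documents (
  id   BINARY(12) NOT NULL,  -- Mongo ObjectID: 4-byte time, 3-byte machine,
                             -- 2-byte pid, 3-byte counter
  body TEXT,
  PRIMARY KEY (id)           -- InnoDB clusters rows on the primary key
) ENGINE=InnoDB;
```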

InnoDB Failure of some kind

Posted: 12 Jun 2013 01:26 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla, but it failed. I went into its SQL, replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

Am I overengineering MySQL?

Posted: 12 Jun 2013 08:26 PM PDT

On my project, I have to differentiate between registered users and simple visitors. Both of them can set their own properties, specified in the accounts and visitors tables.

I identify them by cookie. If there is no cookie, a user can log in and create a session (and a "remember me" cookie if she wishes); a simple one-time visitor likewise gets a cookie and a session.

I split the current session into either an accountSession (logged-in person) or a visitorSession (simple visitor).

I record the visited pages, user agents and IPs against either the accountSession or the visitorSession.

Am I overengineering?

Here is the diagram:

(MySQL schema diagram)

MySQL is running but not working

Posted: 12 Jun 2013 02:26 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why it is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql over SSH as root, but I see that MySQL activity is reduced; look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

Almost all my websites using MySQL say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin

#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented out most of it to return it to its simplest version... How can I make the web side connect to MySQL?

How to check growth of database in mysql?

Posted: 12 Jun 2013 04:26 PM PDT

I want to know if there is any method to check the growth of a database from its files.

EXAMPLES

  • Database A contains all tables in INNODB storage engine
  • Database B contains all tables in MyISAM storage engine
  • Database C contains a mixture of InnoDB and MyISAM tables
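MySQL keeps no built-in growth history, but one hedged starting point is to sample per-database size from information_schema on a schedule and compare the samples over time; the query works the same for the InnoDB, MyISAM and mixed databases above:

```sql
-- Current size per database in MB; run periodically and store the results.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;
```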

Setting NLS params in a logon trigger doesn't work in Oracle 11g

Posted: 12 Jun 2013 02:35 PM PDT

We have a trigger set up so that when a specific user logs in, their NLS session parameters are modified. This used to work on Oracle 10g. We've just migrated to Oracle 11g RAC, and the session parameters no longer persist. To illustrate, I've pasted a session below that shows NLS_DATE_FORMAT not being applied.

This is the trigger we're using:

create or replace TRIGGER schmea.nls_tr
AFTER logon ON DATABASE
BEGIN
   execute immediate 'ALTER SESSION SET NLS_DATE_FORMAT = ''YYYY-MM-DD HH24:MI:SS'' NLS_TIMESTAMP_FORMAT = ''YYYY-MM-DD HH24:MI:SS.FF'' NLS_TERRITORY = ''AMERICA''';
END;

The formats above are not default, so they appear to change on login.

SQL> select * from nls_session_parameters where parameter = 'NLS_TIMESTAMP_FORMAT' or parameter = 'NLS_DATE_FORMAT';

PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_DATE_FORMAT
YYYY-MM-DD HH24:MI:SS

NLS_TIMESTAMP_FORMAT
YYYY-MM-DD HH24:MI:SS.FF

SQL> select count(*) from TABLE where start_date > '2012-06-10 00:00:00';
select count(*) from TABLE where start_date > '2012-06-10 00:00:00'
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P024, instance
[domain.com]:[instance] (1)
ORA-01861: literal does not match format string

SQL> alter session set NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

Session altered.

SQL> select count(*) from TABLE where start_date > '2012-06-10 00:00:00';

  COUNT(*)
----------
      4901

SQL> select * from nls_session_parameters where parameter = 'NLS_TIMESTAMP_FORMAT' or parameter = 'NLS_DATE_FORMAT';

PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_DATE_FORMAT
YYYY-MM-DD HH24:MI:SS

NLS_TIMESTAMP_FORMAT
YYYY-MM-DD HH24:MI:SS.FF

Please help. I've been tearing my hair out for 13 hours, 7 minutes and 4 seconds. Any ideas?

Thank you.

Is it safe to delete dummy tables Oracle 11g XE

Posted: 12 Jun 2013 12:59 PM PDT

I created a database schema in 11g Express Edition; however, there are some APEX tables and some dummy data...

Is it safe to delete them? I would also like to know what APEX means and why these tables have a dollar sign. Thanks.

APEX$_ACL                      TABLE
APEX$_WS_FILES                 TABLE
APEX$_WS_HISTORY               TABLE
APEX$_WS_LINKS                 TABLE
APEX$_WS_NOTES                 TABLE
APEX$_WS_ROWS                  TABLE
APEX$_WS_TAGS                  TABLE
APEX$_WS_WEBPG_SECTIONS        TABLE
APEX$_WS_WEBPG_SECTION_HISTORY TABLE

apex cmd

apex web

I used SYS to create a new user, the new user was created along with a workspace and a database schema, please see the images above.

Most idiomatic way to implement UPSERT in Postgresql nowadays

Posted: 12 Jun 2013 04:41 PM PDT

I've read about different UPSERT implementations in PostgreSQL, but all of these solutions are relatively old or relatively exotic (using a writable CTE, for example).

And I'm just not enough of a psql expert to tell immediately whether these solutions are old because they are well recommended, or whether they are (well, almost all of them) just toy examples not appropriate for production use.

So my question is the following. Given that it is the year 2012, what is the most common, most thread-safe way to implement UPSERT in PostgreSQL?
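(Answering with hindsight, for later readers: since PostgreSQL 9.5, INSERT ... ON CONFLICT is the idiomatic, race-free UPSERT; the table here is a made-up example. On 9.4 and earlier, the retry-loop function from the PostgreSQL documentation remained the standard approach.)

```sql
CREATE TABLE counters (
  name TEXT PRIMARY KEY,
  hits INT NOT NULL
);

-- Insert a new row, or atomically bump the existing one on key conflict.
INSERT INTO counters (name, hits)
VALUES ('home', 1)
ON CONFLICT (name)
DO UPDATE SET hits = counters.hits + EXCLUDED.hits;
```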

Oracle 11g see transaction or session start time

Posted: 12 Jun 2013 01:05 PM PDT

I want to know when a session or transaction started. The deadlock file doesn't give me this information. Is there some log file that keeps these records?

I've got a transaction ID "TX-1234-abcd", a session number, and some "DID 000-000ABCD" number.
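There is no regular log of this, but while the session and transaction are still open, the start times can be read from the dynamic performance views (standard columns: V$SESSION.LOGON_TIME, and V$TRANSACTION.START_TIME, the latter a character string):

```sql
-- When each session logged on
SELECT sid, serial#, username, logon_time
FROM v$session;

-- Start time of each active transaction, tied back to its session
SELECT s.sid, s.username, t.start_time
FROM v$transaction t
JOIN v$session s ON s.taddr = t.addr;
```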

SQL Server Cannot drop database <dbname> because it is currently in use... but no sessions displayed

Posted: 12 Jun 2013 01:25 PM PDT

When I try to drop a database I get the error "Cannot drop database "dbname" because it is currently in use". However, when I run sp_who2, there are definitely no sessions connected to this database. I've also set the database to single_user mode with rollback immediate.

Why is this happening?
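A hedged troubleshooting sketch: the blocking connection is often one sp_who2 makes easy to miss (for example, your own query window still scoped to the database). Listing attached sessions from master, then taking single-user mode and dropping in the same batch, usually clears it; 'dbname' is a placeholder:

```sql
USE master;

-- Sessions still attached to the database (may include your own SPID)
SELECT spid, loginame, program_name
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('dbname');

-- Kick remaining users and drop in one batch
ALTER DATABASE dbname SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE dbname;
```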
