Wednesday, June 12, 2013

Get turno.tipo, maquina.patente and operador.nombre and status, complex query

Posted: 12 Jun 2013 08:16 PM PDT

I have this DB design. I'm trying to build a query that should return turno.tipo, maquina.patente, operador.nombre and detalle_emisores.revisado, but it's a bit complex for my level of knowledge. I started with this:

SELECT re.*, t.tipo, m.patente, o.nombre, de.revisado
FROM sdriving_registros_emisores re
LEFT JOIN sdriving_turno t ON re.idturno = t.idturno
LEFT JOIN sdriving_maquina_emisor me ON me.idmaquinaemisor = re.maquinaemisorid
LEFT JOIN sdriving_maquina m ON me.idmaquina = m.idmaquina
LEFT JOIN sdriving_operador o ON re.idoperador = o.idoperador
LEFT JOIN sdriving_detalle_emisores de ON de.idregistros = re.idregistros
WHERE o.idempresa = 1

Is that right? Any advice?

Error Code : 1248 Every derived table must have its own alias in MySQL

Posted: 12 Jun 2013 08:39 PM PDT

I have a problem here. When I try to run the SELECT query below, I get this error message:

Error Code : 1248 Every derived table must have its own alias

SELECT B.BRANCH_CODE, B.BRANCH_NAME, C.COMPANY_CODE, C.NAME, TSK.DATE_CREATE,
  CASE TB.BULK_TYPE
    WHEN 1 THEN 'Bulk Transfer'
    WHEN 2 THEN 'Bulk Payment'
    ELSE 'Payroll'
  END AS TRX_METHOD_E,
  CASE TB.BULK_TYPE
    WHEN 1 THEN 'Bulk Transfer'
    WHEN 2 THEN 'Bulk Pembayaran Tagihan'
    ELSE 'Pembayaran Gaji'
  END AS TRX_METHOD_I,
  TB.TOTAL_RECORD,
  TB.ACCOUNT_NO,
  TSK.TRX_TOTAL,
  TC.TOTAL_CHARGE,
  DATE(TSK.DATE_TRX) AS DATE_TRX,
  TB.REF_ID,
  CASE
    WHEN TSK.TRX_COUNT_SUCCESS = TSK.TRX_COUNT_ALL THEN 'All Success'
    WHEN TSK.TRX_COUNT_FAIL = TSK.TRX_COUNT_ALL THEN 'All Failed'
    WHEN TSK.TRX_COUNT_SUCCESS > 0 AND TSK.TRX_COUNT_FAIL > 0 THEN 'Partial Success (' || TSK.TRX_COUNT_SUCCESS || '/' || TSK.TRX_COUNT_ALL || ')'
    ELSE 'Pending Execution'
  END AS TRX_STATUS_E,
  CASE
    WHEN TSK.TRX_COUNT_SUCCESS = TSK.TRX_COUNT_ALL THEN 'Berhasil Semua'
    WHEN TSK.TRX_COUNT_FAIL = TSK.TRX_COUNT_ALL THEN 'Gagal Semua'
    WHEN TSK.TRX_COUNT_SUCCESS > 0 AND TSK.TRX_COUNT_FAIL > 0 THEN 'Berhasil Sebagian (' || TSK.TRX_COUNT_SUCCESS || '/' || TSK.TRX_COUNT_ALL || ')'
    ELSE 'Tunggu Eksekusi'
  END AS TRX_STATUS_I
FROM CB_TASK_BULKS TB
JOIN CB_TASKS TSK ON TSK.REF_ID = TB.REF_ID
JOIN CB_COMPANIES C ON C.COMPANY_ID = TSK.COMPANY_ID
JOIN CB_BRANCHES B ON B.BRANCH_CODE = C.BRANCH_CODE,
(
  SELECT REF_ID, SUM(CHARGE) AS TOTAL_CHARGE
  FROM
  (
    SELECT XTB.REF_ID, SUM(CHARGE) AS CHARGE
    FROM CB_TRANSFERS XT
    JOIN CB_TASK_BULK_DETAILS XTBD ON XTBD.BULK_DETAIL_ID = XT.BULK_DETAIL_ID
    JOIN CB_TASK_BULKS XTB ON XTB.REF_ID = XTBD.REF_ID
    GROUP BY XTB.REF_ID

    UNION

    SELECT XTB2.REF_ID, SUM(CHARGE) AS CHARGE
    FROM CB_TRANSFERS_DOM XTD
    JOIN CB_TASK_BULK_DETAILS XTBD2 ON XTBD2.BULK_DETAIL_ID = XTD.BULK_DETAIL_ID
    JOIN CB_TASK_BULKS XTB2 ON XTB2.REF_ID = XTBD.REF_ID
    GROUP BY XTB.REF_ID

    UNION

    SELECT XTB3.REF_ID, SUM(CHARGE) AS CHARGE
    FROM CB_PAYMENTS XP
    JOIN CB_TASK_BULK_DETAILS XTBD3 ON XTBD3.BULK_DETAIL_ID = XP.BULK_DETAIL_ID
    JOIN CB_TASK_BULKS XTB3 ON XTB3.REF_ID = XTBD.REF_ID
    GROUP BY XTB.REF_ID
  )
  GROUP BY REF_ID
) TC
WHERE TC.REF_ID = TSK.REF_ID
AND (TSK.TRX_COUNT_SUCCESS > 0 OR TSK.TRX_COUNT_FAIL > 0);

Can somebody help me find the problem here? I think I've given every table its own "name" (alias), like TB for CB_BULKS_DATA, for example.
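For reference, error 1248 is about derived tables (subqueries in the FROM clause), not base tables. A minimal sketch, using a hypothetical `charges` table:

```sql
-- Fails with error 1248: the derived table in FROM has no alias
SELECT ref_id, SUM(charge)
FROM (SELECT ref_id, charge FROM charges)
GROUP BY ref_id;

-- Works: every derived table gets its own alias
SELECT t.ref_id, SUM(t.charge)
FROM (SELECT ref_id, charge FROM charges) AS t
GROUP BY t.ref_id;
```

In the posted query, the inner UNION subquery (the one wrapped only in `( ... ) GROUP BY REF_ID` inside TC) appears to be the derived table without an alias.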

Can't Find CustomerRefListID Field in Table in QuickBooks Enterprise Database

Posted: 12 Jun 2013 06:18 PM PDT

I am trying to pull data into a PowerPivot worksheet in Excel and create relationships. My reference sheet says that the ListID field on the Customer table is related to the CustomerRefListID field on the SalesOrder table. But when I look in the SalesOrder table, I find no trace of anything that resembles that. I am no expert in databases or database modeling, but am looking for advice or suggestions that could lead me in the right direction. Below is the link to the reference table that shows the relationships.

Thanks for the Help

Derek

(http://doc.qodbc.com/qodbc/20/tables/table_relations_us4214.html?trelations_id=22&tn_us=TRUE)

pg_ctl start gives FATAL: role does not exist

Posted: 12 Jun 2013 06:18 PM PDT

On my system, whoami returns Dan. I run initdb with the --username=PgUser argument. Then as OS user Dan I run pg_ctl start on the newly-initialized cluster, and it gives this output:

LOG:  database system was shut down at 2013-06-13 00:54:33 UTC
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
FATAL:  role "Dan" does not exist
 done
server started

The server appears to have started up normally. I believe the error occurs because it wants me to run pg_ctl as the OS user named PgUser. What I want to know is:

  • Is that belief correct?
  • Is there a way to specify the user as PgUser without using su? I would prefer not to create a new OS user and would like to use a PostgreSQL user with a different username (I'm doing some code refactoring that makes this more convenient).
  • Is the instance I started crippled in some way because I started it as OS user Dan?
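If the goal is only to silence the FATAL line, one option (a sketch, assuming you can connect as the bootstrap superuser PgUser) is to create a role matching the OS username:

```sql
-- Connect as the bootstrap superuser first, e.g.: psql -U PgUser -d postgres
-- The double quotes matter: unquoted identifiers fold to lowercase in PostgreSQL.
CREATE ROLE "Dan" LOGIN;
```

The FATAL line comes from a client connection attempted as the current OS user; the server process itself is not affected by it.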

Home rolled gridview search results shows rows that are supposed to be ignored

Posted: 12 Jun 2013 06:17 PM PDT

Note: I tried posting this on Stack Overflow but didn't get any good bites. I'm still researching this but wanted to see if any db czars can assist here.

I'm in the process of trying to troubleshoot some SQL 2012 code in an ASP/C# gridview search that I rolled myself. I have a search box that searches on specific fields but the twist here is that I only want results after a certain row.

In this database I have a sequence that autogenerates a number, stored in a column called 'record_number', for every entry in this table, and I only want results after record #500. The entries are created manually via a different page.

Right now my search code is as follows:

SelectCommand="SELECT [record_number], [column_a], [column_b], [column_c],
    [column_d], [column_e], [column_f], [column_g], [column_h],
    [column_i], [column_j]
  FROM [schema].[table_name]
  WHERE record_number >= 500
  AND ([column_a] LIKE '%' + @column_a + '%')
  OR ([column_c] LIKE '%' + @column_c + '%')
  OR ([column_e] LIKE '%' + @column_e + '%')
  OR ([column_g] LIKE '%' + @column_g + '%')
  OR ([column_h] LIKE '%' + @column_h + '%')
  OR ([column_j] LIKE '%' + @column_j + '%')"

I can post my SelectParameters if needed, or anything else. The challenge I'm facing: when I run a search for just the letter 'c' with this code, the results include record_numbers both less than AND greater than 500. At this point I'm spinning my wheels, as I'm not seeing where the problem lies.

If more information is needed by the masses, please let me know.

EDIT

Forgot this part - I've tried moving the conditions around and consolidating them, but no go.
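One likely culprit (an assumption, since only the symptom is described): in T-SQL, AND binds tighter than OR, so the record_number filter only applies to the first branch. An illustrative sketch:

```sql
-- The posted WHERE clause effectively parses as:
--   (record_number >= 500 AND [column_a] LIKE ...) OR ([column_c] LIKE ...) OR ...
-- so every OR branch after the first ignores the record_number filter.

-- To apply the filter to all branches, wrap the OR chain in parentheses:
SELECT [record_number], [column_a], [column_c]
FROM [schema].[table_name]
WHERE record_number >= 500
  AND ([column_a] LIKE '%' + @column_a + '%'
    OR [column_c] LIKE '%' + @column_c + '%');
```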

I'm trying to start a DB app for a project I'm doing and keep getting stuck at sqlite3.dll

Posted: 12 Jun 2013 05:43 PM PDT

I have searched Google and tried various solutions, but I keep getting the same message box no matter which sqlite3.dll I download: 'the procedure entry point sqlite3_backup_finish could not be located in the dynamic link library sqlite3.dll'. I have tried everything I can think of and would greatly appreciate any help. Thanks.

Auto-increment key and Foreign Key as composite for relational database

Posted: 12 Jun 2013 03:06 PM PDT

I do not have any experience with databases other than playing with Workbench and trying to understand how I need to set up this database for a project. I have been searching the internet and have become increasingly confused about how to do what I want. Below is what I am trying to accomplish.

I have a need for a MySQL database that will hold patient information. Patient information will be name, id (unique), and so on as well as all of the tests and samples for each test.

  • Each patient can have multiple tests
  • Each test can have multiple samples
  • If I delete a patient then all test and samples are deleted
  • If I delete a test then all samples will be deleted
  • If I delete all samples of one test OR all tests of one patient then the test OR patient should not be deleted.

So far I understand that a relational database is what I need. I also understand that I can use foreign keys to link the patient IDs in each table, as well as the test IDs. I have also concluded that I want data integrity, so that I don't have to manually manage deleting and updating rows; this would also prevent orphaned rows. I believe this can be done with a composite key, but this is where I start getting confused about how to set it up.

I would expect it to work like this (with each line being an insert statement):

Patient Table:

|-------------------|
|Patient ID  |Name  |
|-------------------|
|12345       |ANG   |
|54321       |JUE   |
|-------------------|

Tests Table:

|----------------------|
|Test ID  |Patient ID  |
|----------------------|
|1        |12345       |
|2        |12345       |
|3        |12345       |
|1        |54321       |
|2        |54321       |
|4        |12345       |
|----------------------|

Sample Table:

|----------------------------------|
|Sample ID  |Test ID  |Patient ID  |
|----------------------------------|
|1          |1        |12345       |
|2          |1        |12345       |
|3          |1        |12345       |
|1          |2        |12345       |
|1          |1        |54321       |
|1          |2        |54321       |
|2          |2        |54321       |
|2          |2        |12345       |
|----------------------------------|

Can this be done easily? I also know that this could be accomplished with one or two triggers, but I understood that approach to not handle deletions and the like. Is there another way to do this that is easier to implement? Also, data written to and read from this database is handled by my LabVIEW program.
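A sketch of the cascading composite-key design described above (table and column names are assumptions). Note that InnoDB cannot auto-generate the per-patient test numbers the way MyISAM's composite AUTO_INCREMENT can, so the application (the LabVIEW program here) would assign test_id and sample_id:

```sql
CREATE TABLE patient (
  patient_id INT NOT NULL,
  name       VARCHAR(100),
  PRIMARY KEY (patient_id)
) ENGINE=InnoDB;

CREATE TABLE test (
  patient_id INT NOT NULL,
  test_id    INT NOT NULL,
  PRIMARY KEY (patient_id, test_id),
  FOREIGN KEY (patient_id) REFERENCES patient (patient_id)
    ON DELETE CASCADE            -- deleting a patient deletes their tests
) ENGINE=InnoDB;

CREATE TABLE sample (
  patient_id INT NOT NULL,
  test_id    INT NOT NULL,
  sample_id  INT NOT NULL,
  PRIMARY KEY (patient_id, test_id, sample_id),
  FOREIGN KEY (patient_id, test_id) REFERENCES test (patient_id, test_id)
    ON DELETE CASCADE            -- deleting a test deletes its samples
) ENGINE=InnoDB;
```

Deleting all samples of a test leaves the test row untouched, which matches the last requirement: cascades only flow from parent to child.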

Linked Server to Mirrored Database Fails

Posted: 12 Jun 2013 01:20 PM PDT

I've set up a mirrored database on two servers, Server1 and Server2.

On Server3, I'm trying to setup a linked server that will work regardless of which server has the active database.

I'm using this to set up the linked server:

EXEC sp_addlinkedserver
  @server = N'MyLinkedServer',
  @srvproduct = N'',
  @provider = N'SQLNCLI',
  @provstr = N'Server=SERVER1;FailoverPartner=SERVER2;Network=dbmssocn',
  @catalog = N'MyDatabase';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'data access',
  @optvalue = N'true';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'rpc',
  @optvalue = N'false';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'rpc out',
  @optvalue = N'true';

EXEC master.dbo.sp_serveroption
  @server = N'MyLinkedServer',
  @optname = N'connect timeout',
  @optvalue = N'60';

EXEC master.dbo.sp_addlinkedsrvlogin
  @rmtsrvname = N'MyLinkedServer',
  @locallogin = NULL,
  @useself = N'False',
  @rmtuser = N'sqluser',
  @rmtpassword = N'sqlpassword';

As long as MyDatabase on Server1 is the principal, the linked server works. When I manually failover to Server2, the linked server fails.

The error messages are:

Msg 11001, Level 16, State 1, Line 0
TCP Provider: No such host is known.
Msg 18456, Level 14, State 1, Line 1
Login failed for user 'sqluser'.
Msg 4060, Level 11, State 1, Line 1
Cannot open database "MyDatabase" requested by the login. The login failed.

The 18456 and 4060 error messages repeat until the query times out after 60 seconds.

On Server1, I can see repeated failures trying to login to "MyDatabase". It's failing because Server1 is the mirror. I see no attempts to connect to Server2 which is the principal.

If I recreate the linked server with Server2 as the Primary and Server1 as the Mirror, it will work until I fail back to Server1.

The mirrored servers are SQL Server 2012 and the box I'm trying to create a linked server on is SQL Server 2005, but I also have the same problem when creating the linked server on a SQL Server 2008 R2 box.

I've seen reports that it does work.

What am I missing?

Thank you!

Best practices to encrypt all the data

Posted: 12 Jun 2013 01:18 PM PDT

I'm working on a project where the client is afraid of their own employees (i.e., junior I.T. guys whom the client does not trust) querying the database to get sensitive information.

The database is a new SQL Server database, so I have some kind of freedom.

Googling a bit, I landed on SQL Server Transparent Data Encryption, and wondered if that is what I need, and if it is not...

What are the best practices for encrypting all the columns, in all the tables, in a database in order to prevent users from querying it?
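For context, the usual TDE setup looks like the sketch below (key password, certificate and database names are placeholders). One caveat worth stating up front: TDE only encrypts data at rest (data files and backups); it does not stop anyone who can log in from querying the data, so it may not address the stated concern on its own:

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'placeholder-password';  -- placeholder
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';        -- back this up!

USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
  WITH ALGORITHM = AES_256
  ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;
```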

how to verify mysql backup file

Posted: 12 Jun 2013 12:58 PM PDT

Is there any tool to verify a backup taken with mysqldump without manually restoring it?

I have to take a full backup daily, so it is difficult for me to verify the backups by restoring them every day, as that takes time and also temporary disk space.

Please suggest how I can verify my backups online.

'skip rows' in a flat file source does not work as advertised

Posted: 12 Jun 2013 12:46 PM PDT

In SSIS(2012 specifically) there is an option in the Flat File Datasource to skip a number of lines at the beginning of a file. The preview appears correctly. However, when trying to execute the data flow, it appears that those rows are still being validated; so that if they contain invalid data(blank rows), the datasource fails. This prevents using a conditional switch to remove rows, since the data flow never starts.

Is this an accurate description of the normal behaviour, and is there any way to work around it?

Sample data looks like -

COL1|COL2|COL3
abc|123|10
def|456|20
ghi|789|30

[screenshot: error configuration]

Is it recommended to run a Oracle schema backup when there are active sessions?

Posted: 12 Jun 2013 12:51 PM PDT

I need to back up two schemas in a production database (Oracle 11gR2) that are being accessed by an application and some other systems. These schemas are quite small, and exp will be used for the export.

Are there any best practices or recommendations for backups using exp? I cannot afford to have the users face any interruption during these backups. For example, has anyone seen the export utility hang?

Infinite Loop While Running Query

Posted: 12 Jun 2013 05:28 PM PDT

When I run this query from the command line, it loops continuously. I'm not sure why this is occurring. Could anyone explain this to me and offer a fix?

I will be using a query similar to this in a Python script and need to ensure I get the expected data.

SELECT `leads`.`status`, `leads_cstm`.`apptdateourtime_c`
FROM leads, leads_cstm
WHERE ((`leads`.`status` = 'Converted')
   AND (`leads_cstm`.`apptdateourtime_c` BETWEEN '2013-06-04 00:00:00'
                                             AND '2013-06-05 00:00:00'))
OR    ((`leads`.`status` = 'AppointmentScheduled')
   AND (`leads_cstm`.`apptdateourtime_c` BETWEEN '2013-06-04 00:00:00'
                                             AND '2013-06-05 00:00:00'));

I'm using MySQL, version 5.1.69

Table structure for leads:

leads | CREATE TABLE `leads` (
  `id` char(36) NOT NULL,
  `tbl_id` int(11) DEFAULT NULL,
  `date_entered` datetime DEFAULT NULL,
  `date_modified` datetime DEFAULT NULL,
  `modified_user_id` char(36) DEFAULT NULL,
  `created_by` char(36) DEFAULT NULL,
  `description` text,
  `deleted` tinyint(1) DEFAULT '0',
  `assigned_user_id` char(36) DEFAULT NULL,
  `salutation` varchar(255) DEFAULT NULL,
  `first_name` varchar(100) DEFAULT NULL,
  `last_name` varchar(100) DEFAULT NULL,
  `title` varchar(100) DEFAULT NULL,
  `department` varchar(100) DEFAULT NULL,
  `do_not_call` tinyint(1) DEFAULT '0',
  `phone_home` varchar(100) DEFAULT NULL,
  `phone_mobile` varchar(100) DEFAULT NULL,
  `phone_work` varchar(10) DEFAULT NULL,
  `phone_other` varchar(100) DEFAULT NULL,
  `phone_fax` varchar(100) DEFAULT NULL,
  `primary_address_street` varchar(150) DEFAULT NULL,
  `primary_address_city` varchar(100) DEFAULT NULL,
  `primary_address_state` varchar(2) DEFAULT NULL,
  `primary_address_postalcode` varchar(20) DEFAULT NULL,
  `primary_address_country` varchar(255) DEFAULT NULL,
  `alt_address_street` varchar(150) DEFAULT NULL,
  `alt_address_city` varchar(100) DEFAULT NULL,
  `alt_address_state` varchar(100) DEFAULT NULL,
  `alt_address_postalcode` varchar(20) DEFAULT NULL,
  `alt_address_country` varchar(255) DEFAULT NULL,
  `assistant` varchar(75) DEFAULT NULL,
  `assistant_phone` varchar(100) DEFAULT NULL,
  `converted` tinyint(1) DEFAULT '0',
  `refered_by` varchar(100) DEFAULT NULL,
  `lead_source` varchar(100) DEFAULT NULL,
  `lead_source_description` text,
  `status` varchar(100) DEFAULT NULL,
  `status_description` text,
  `reports_to_id` char(36) DEFAULT NULL,
  `account_name` varchar(255) DEFAULT NULL,
  `account_description` text,
  `contact_id` char(36) DEFAULT NULL,
  `account_id` char(36) DEFAULT NULL,
  `opportunity_id` char(36) DEFAULT NULL,
  `opportunity_name` varchar(255) DEFAULT NULL,
  `opportunity_amount` varchar(50) DEFAULT NULL,
  `campaign_id` char(36) DEFAULT NULL,
  `birthdate` date DEFAULT NULL,
  `portal_name` varchar(255) DEFAULT NULL,
  `portal_app` varchar(255) DEFAULT NULL,
  `website` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_lead_acct_name_first` (`account_name`,`deleted`),
  KEY `idx_lead_last_first` (`last_name`,`first_name`,`deleted`),
  KEY `idx_lead_del_stat` (`last_name`,`status`,`deleted`,`first_name`),
  KEY `idx_lead_opp_del` (`opportunity_id`,`deleted`),
  KEY `idx_leads_acct_del` (`account_id`,`deleted`),
  KEY `idx_del_user` (`deleted`,`assigned_user_id`),
  KEY `idx_lead_assigned` (`assigned_user_id`),
  KEY `idx_lead_contact` (`contact_id`),
  KEY `idx_reports_to` (`reports_to_id`),
  KEY `idx_lead_phone_work` (`phone_work`),
  KEY `idx_leads_id_del` (`id`,`deleted`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 |

leads_cstm is as follows:

| leads_cstm | CREATE TABLE `leads_cstm` (
  `id_c` char(36) NOT NULL,
  `buinessdescription_c` varchar(255) DEFAULT NULL,
  `businessstatus_c` varchar(100) DEFAULT NULL,
  `numberofemployees_c` int(5) DEFAULT NULL,
  `numberofyearsinbusiness_c` varchar(3) DEFAULT NULL,
  `felonyorbankruptcy_c` tinyint(1) DEFAULT '0',
  `encouragedbyagency_c` tinyint(1) DEFAULT '0',
  `annualrevenue_c` decimal(26,6) DEFAULT NULL,
  `currency_id` char(36) DEFAULT NULL,
  `productsservices_c` tinyint(1) DEFAULT '0',
  `investmoney_c` tinyint(1) DEFAULT '0',
  `fivestar_c` tinyint(1) DEFAULT '0',
  `socioeconomicstatus_c` text,
  `visamastercard_c` tinyint(1) DEFAULT '0',
  `sixreferences_c` tinyint(1) DEFAULT '0',
  `investigatedgsa_c` tinyint(1) DEFAULT '0',
  `faxedinfopack_c` tinyint(1) DEFAULT '0',
  `apptdateourtime_c` datetime DEFAULT NULL,
  `apptdatetheirtime_c` datetime DEFAULT NULL,
  `apptstatus_c` varchar(100) DEFAULT NULL,
  `topic_c` varchar(100) DEFAULT NULL,
  `secondaryphone_c` varchar(13) DEFAULT NULL,
  `extension_c` varchar(6) DEFAULT NULL,
  `timezome_c` varchar(100) DEFAULT NULL,
  `cleadsource_c` varchar(255) DEFAULT NULL,
  `timeissuedatebegan_c` date DEFAULT NULL,
  `timingissuelength_c` varchar(100) DEFAULT NULL,
  `oldsmarteam_c` varchar(100) DEFAULT NULL,
  `manual_enter_c` tinyint(1) DEFAULT '0',
  `suggestedcloser_c` varchar(100) DEFAULT NULL,
  `lead_type_c` varchar(100) DEFAULT 'gsalead',
  `qualifier_c` varchar(30) DEFAULT NULL,
  `secondarycontact_c` varchar(50) DEFAULT NULL,
  `siccode_c` varchar(8) DEFAULT NULL,
  `pamcoqualifiermain_c` varchar(255) DEFAULT NULL,
  `typeofsource_c` varchar(100) DEFAULT NULL,
  `l_printed_c` tinyint(1) DEFAULT '0',
  PRIMARY KEY (`id_c`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 |
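One observation (an assumption based on the SugarCRM-style schema shown): the query joins leads and leads_cstm with no join condition, producing a cartesian product of the two tables, which can run long enough to look like an infinite loop. A sketch of the joined form, assuming leads_cstm.id_c matches leads.id:

```sql
SELECT l.`status`, c.`apptdateourtime_c`
FROM leads AS l
JOIN leads_cstm AS c ON c.id_c = l.id   -- the join condition the original lacks
WHERE l.`status` IN ('Converted', 'AppointmentScheduled')
  AND c.`apptdateourtime_c` BETWEEN '2013-06-04 00:00:00'
                                AND '2013-06-05 00:00:00';
```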

Database associative table

Posted: 12 Jun 2013 01:17 PM PDT

I have a coworker who insists on using associative tables to create nullable associations. For example, we have these tables:

cm.accounts
-----------
id:int (PK, identity)
...

sm.services
-----------
id:int (PK, identity)
...

cm.account_services
-------------------
fk_account_id:int (FK, cm.accounts.id)
fk_service_id:int (FK, nullable, sm.services.id)

and he's arguing that creating rows like

INSERT INTO cm.account_services
            (fk_account_id, fk_service_id)
VALUES      (123, NULL);

is perfectly fine. I disagree. The purpose of this "design" is to allow a temporary (or future) "relationship" to be created, and be updated later on. It doesn't have to be this way and this is my coworker's decision and not a requirement.

I believe this is not good practice at all; it simply adds more complexity and undefined behaviour than necessary. I know this question may spark an animated discussion, and I'm not looking for opinions, but facts.

By using references, can an expert help break this tie?

Thanks.

** NOTE **

Just found this paragraph here :

Nullable foreign keys are not considered to be good practice in traditional data modelling [...] This is not a requirement of Hibernate, and the mappings will work if you drop the nullability constraints.

** EDIT **

What I would suggest is something like this :

cm.accounts
-----------
id:int (PK, identity)
...

sm.services
-----------
id:int (PK, identity)
...

cm.account_services
-------------------
fk_account_id:int (FK, cm.accounts.id)
fk_service_id:int (FK, sm.services.id)
link_type:int    // or an enum. i.e. "validated", "deleted", "invalid", etc.

So we could identify (instead of null values) what kind of relationship the two table rows have, thus giving more verbosity in the business logic, etc.

MySQL - What is the difference between the cleartext authentication and PAM authentication plugins?

Posted: 12 Jun 2013 04:42 PM PDT

PAM Authentication:

PAM enables a system to use a standard interface to access various kinds of authentication methods, such as Unix passwords or an LDAP directory.

Cleartext Client-Side Authentication Plugin

The client-side plugin can be used to send the password to the server in clear text. There is no corresponding server-side plugin.

I have read the documentation but could not get enough help. PAM authentication is supported only in MySQL Enterprise Edition, but cleartext authentication applies to all MySQL servers, as it is the client that invokes this kind of authentication mechanism.

Questions:

Has anybody actually tried using the PAM authentication or cleartext authentication plugin?

If so, please explain how to configure and set it up, with an example.

Which is the redundant relationship in this ER Diagram and why?

Posted: 12 Jun 2013 12:49 PM PDT

Which is the redundant relationship in this ER diagram, and why? A few friends and I have been trying to figure it out but keep going in circles.

[ER diagram]

where the attributes of each entity are as follows:

Employee: Employee_No, Name, Address and Telephone_No.
Department: Department_No, Name and Location.
Training Course: Course Code, Title, Start_Date and Description.

Access to tables in SQL Server

Posted: 12 Jun 2013 04:35 PM PDT

I would like to know if there is a table where accesses to tables are recorded, per user or per system.

I'm using both SQL Server 2005 and 2008.

User permissions for Microsoft SQL Server

Posted: 12 Jun 2013 03:28 PM PDT

Please advise on granting permissions for a SQL Server user. The conditions are: this user must be able to create, delete, modify and drop tables/views/procedures in the user's default schema (not dbo), and not in any other schema. I have granted the db_ddladmin privilege, but with it the user can perform these actions on other schemas as well.
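A common alternative to db_ddladmin is scoping the rights to a single schema (a sketch; user_schema and some_user are placeholder names). CREATE TABLE/VIEW/PROCEDURE are database-level permissions, but creating an object also requires ALTER on the target schema, so in combination they confine DDL to that one schema:

```sql
GRANT ALTER ON SCHEMA::user_schema TO some_user;   -- create/alter/drop objects there
GRANT CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO some_user;
```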

Why don't databases create their own indexes automatically?

Posted: 12 Jun 2013 12:43 PM PDT

I would have thought that databases know enough about what they encounter often, and about the demands placed on them, to decide on their own to add indexes to highly requested data.

Would Mongo's ObjectID work well in InnoDB's clustered index?

Posted: 12 Jun 2013 01:21 PM PDT

Mongo's ObjectID is defined like this:

ObjectId is a 12-byte BSON type, constructed using:

a 4-byte value representing the seconds since the Unix epoch,

a 3-byte machine identifier,

a 2-byte process id,

and a 3-byte counter, starting with a random value.

Assuming the PRIMARY KEY was BINARY(12), would this work as well as an auto-incremented INT? Does it count as sequential for InnoDB's clustered index? Would gaps in the timestamp be a problem?
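For reference, a BINARY(12) primary key like the sketch below is only roughly time-ordered: the leading 4-byte timestamp makes inserts from different seconds arrive in increasing key order, but the machine/process bytes shuffle the order within any one second, so it is not strictly sequential the way an AUTO_INCREMENT INT is (gaps in the timestamp, on the other hand, are harmless to a B-tree):

```sql
-- A hypothetical table keyed on a Mongo-style ObjectId
CREATE TABLE docs (
  id      BINARY(12) NOT NULL,   -- 4-byte timestamp + machine + pid + counter
  payload VARCHAR(255),
  PRIMARY KEY (id)               -- InnoDB clusters rows on this key
) ENGINE=InnoDB;
```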

InnoDB Failure of some kind

Posted: 12 Jun 2013 01:26 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla but it failed. I went into their SQL and replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

Am I overengineering MySQL?

Posted: 12 Jun 2013 08:26 PM PDT

On my project, I have to differentiate between registered users and simple visitors. Both can set their own properties, specified in the accounts and visitors tables.

I identify them by cookie. If there is no cookie, a user can log in and create a session (and a "remember me" cookie if she wishes); a simple one-time visitor likewise gets a cookie and a session.

I split the current session into either an accountSession (logged-in person) or a visitorSession (simple visitor).

I insert the visited pages, userAgents and IPs, linked to either the accountSession OR the visitorSession.

Am I overengineering?

Here is the diagram:

[database diagram]

MySQL is running but not working

Posted: 12 Jun 2013 02:26 PM PDT

In an attempt to tune MySQL to work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why it is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH as root, but I see that MySQL activity is reduced; look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin

#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented out most of it to return it to its simplest version. How can I make the web side connect to MySQL?

How to check growth of database in mysql?

Posted: 12 Jun 2013 04:26 PM PDT

I want to know: is there any method to check the growth of a database on disk?

EXAMPLES

  • Database A contains all tables in INNODB storage engine
  • Database B contains all tables in MyISAM storage engine
  • Database C contains a mixture of InnoDB and MyISAM tables
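One common approach (a sketch using information_schema, which covers both InnoDB and MyISAM) is to sample the reported sizes on a schedule and compare over time:

```sql
-- Run daily (e.g. from cron) and store the output to track growth per database
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;
```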

Setting NLS params in a logon trigger doesn't work in Oracle 11g

Posted: 12 Jun 2013 02:35 PM PDT

We have a trigger set up so that when a specific user logs in, their NLS session parameters are modified. This used to work on Oracle 10g. We've just migrated to Oracle 11g RAC, and the session parameters no longer persist. To illustrate, I've pasted a session that shows NLS_DATE_FORMAT not being applied.

This is the trigger we're using:

create or replace TRIGGER schmea.nls_tr
AFTER logon ON DATABASE
BEGIN
   execute immediate 'ALTER SESSION SET NLS_DATE_FORMAT = ''YYYY-MM-DD HH24:MI:SS'' NLS_TIMESTAMP_FORMAT = ''YYYY-MM-DD HH24:MI:SS.FF'' NLS_TERRITORY = ''AMERICA''';
END;

The formats above are not the defaults, so the trigger does appear to run on login.

SQL> select * from nls_session_parameters where parameter = 'NLS_TIMESTAMP_FORMAT' or parameter = 'NLS_DATE_FORMAT';

PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_DATE_FORMAT
YYYY-MM-DD HH24:MI:SS

NLS_TIMESTAMP_FORMAT
YYYY-MM-DD HH24:MI:SS.FF

SQL> select count(*) from TABLE where start_date > '2012-06-10 00:00:00';
select count(*) from TABLE where start_date > '2012-06-10 00:00:00'
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P024, instance
[domain.com]:[instance] (1)
ORA-01861: literal does not match format string

SQL> alter session set NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

Session altered.

SQL> select count(*) from TABLE where start_date > '2012-06-10 00:00:00';

  COUNT(*)
----------
      4901

SQL> select * from nls_session_parameters where parameter = 'NLS_TIMESTAMP_FORMAT' or parameter = 'NLS_DATE_FORMAT';

PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_DATE_FORMAT
YYYY-MM-DD HH24:MI:SS

NLS_TIMESTAMP_FORMAT
YYYY-MM-DD HH24:MI:SS.FF

Please help. I've been tearing my hair out for 13 hours, 7 minutes and 4 seconds. Any ideas?

Thank you.

Is it safe to delete dummy tables Oracle 11g XE

Posted: 12 Jun 2013 12:59 PM PDT

I created a database schema in 11g Express Edition; however, there are some APEX tables and some dummy data.

Is it safe to delete them? I would also like to know what APEX means and why these tables have a dollar sign. Thanks...

APEX$_ACL                      TABLE
APEX$_WS_FILES                 TABLE
APEX$_WS_HISTORY               TABLE
APEX$_WS_LINKS                 TABLE
APEX$_WS_NOTES                 TABLE
APEX$_WS_ROWS                  TABLE
APEX$_WS_TAGS                  TABLE
APEX$_WS_WEBPG_SECTIONS        TABLE
APEX$_WS_WEBPG_SECTION_HISTORY TABLE

apex cmd

apex web

I used SYS to create a new user, the new user was created along with a workspace and a database schema, please see the images above.

Most idiomatic way to implement UPSERT in Postgresql nowadays

Posted: 12 Jun 2013 04:41 PM PDT

I've read about different UPSERT implementations in PostgreSQL, but all of these solutions are relatively old or relatively exotic (using writeable CTE, for example).

And I'm just not enough of a PostgreSQL expert to tell immediately whether these solutions are old because they are well established, or because they are (as almost all of them appear to be) just toy examples not appropriate for production use.

So my question is the following: given that it is the year 2012, what is the most common, most concurrency-safe way to implement UPSERT in PostgreSQL?
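For reference, the retry-loop pattern from the PostgreSQL documentation was the standard answer in this era (before INSERT ... ON CONFLICT existed). The table db(a int primary key, b text) is the documentation's example, not anything from this post:

```sql
-- Example table from the PostgreSQL docs: db(a int primary key, b text)
CREATE OR REPLACE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there: try to insert. A concurrent session may insert the same
        -- key first; catch the unique violation and loop back to the UPDATE.
        BEGIN
            INSERT INTO db(a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- fall through and retry the UPDATE
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;
```

The subtransaction per loop iteration has a cost, but the function is safe under concurrent writers, which is where most of the simpler "UPDATE then INSERT" snippets fail.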

Oracle 11g see transaction or session start time

Posted: 12 Jun 2013 01:05 PM PDT

I want to know when a session or transaction started. The deadlock file doesn't give me this information. Is there some logfile that keeps these records?

I've got a transaction id "TX-1234-abcd", a session number and some "DID 000-000ABCD" number.
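For sessions and transactions that are still active, the information lives in the dynamic performance views rather than a log file. A sketch, assuming you have SELECT privilege on the V$ views (the TX-usn-slot/sqn lock name is built from the three XID columns):

```sql
-- Session start: V$SESSION.LOGON_TIME
-- Transaction start: V$TRANSACTION.START_TIME, joined via the TADDR address
SELECT s.sid,
       s.serial#,
       s.logon_time,
       t.start_time,
       t.xidusn, t.xidslot, t.xidsqn   -- components of the TX-xxxx-yyyy id
FROM   v$session     s
JOIN   v$transaction t ON t.addr = s.taddr;
```

Once the transaction has ended, these views no longer show it; historical start times would need auditing or AWR/ASH data captured beforehand.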

SQL Server Cannot drop database <dbname> because it is currently in use... but no sessions displayed

Posted: 12 Jun 2013 01:25 PM PDT

When I try to drop a database I get the error "Cannot drop database "dbname" because it is currently in use". However, when I run sp_who2, there are definitely no sessions connected to this database. I've also set the database to single_user mode with rollback immediate.

Why is this happening?
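One common cause: sp_who2 shows each session's current database context, but a session can hold a lock on the database while sitting in a different context (your own SSMS window is a frequent culprit). A sketch of the usual sequence, with dbname as a placeholder:

```sql
-- Sessions actually holding a lock on the target database
SELECT request_session_id
FROM   sys.dm_tran_locks
WHERE  resource_type        = 'DATABASE'
AND    resource_database_id = DB_ID(N'dbname');

USE master;  -- make sure your own session isn't the one "using" it
ALTER DATABASE [dbname] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [dbname];
```

Running the ALTER and DROP in the same batch immediately after each other closes the window in which another connection can grab the single-user slot.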

[Articles] Data Vision

[Articles] Data Vision


Data Vision

Posted: 11 Jun 2013 11:00 PM PDT

Computer vision is something researchers have worked on for years. We have no shortage of cameras, and the computing systems behind these devices are starting to actually understand what is being seen. That understanding is very valuable. Will we see something similar happen in our SQL Server technologies?

[MS SQL Server] Error Running DBCC CHECKDB('databasename')

[MS SQL Server] Error Running DBCC CHECKDB('databasename')


Error Running DBCC CHECKDB('databasename')

Posted: 12 Jun 2013 02:39 AM PDT

I'm getting the following error when running DBCC on a database server containing three different databases. Two of the commands run fine, except for this database. What is the best way to handle this? I have read different suggestions, and one of them is to restore from a backup file; would that be a good solution?

Executing query "USE [TestDB] ".: 50% complete End Progress Error: 2013-06-12 00:01:09.92 Code: 0xC002F210 Source: Check Database Integrity Task Execute SQL Task Description: Executing the query "DBCC CHECKDB(N'TestDB') WITH NO_INFOMSGS " failed with the following error: "The In-row data RSVD page count for object "table1", index ID 1, partition ID 306297224822784, alloc unit ID 306297224822784 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE. The In-row data RSVD page count for object "table2", index ID 1, partition ID 369221408063488, alloc unit ID 369221408063488 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE. CHECKDB found 0 allocation errors and 1 consistency errors in table 'table1' (object ID 378757448). CHECKDB found 0 allocation errors and 1 consistency errors in table 'table2' (object ID 1338904287). CHECKDB found 0 allocation errors and 2 consistency errors in database 'TestDB'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error Progress: 2013-06-12 00:01:09.94 Source: Check Database Integrity Task
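The errors quoted are page-count (metadata) inaccuracies, not data corruption, and the message itself names the fix, so restoring from backup should not be necessary for this class of error. A sketch, using the database name from the post:

```sql
-- Corrects the In-row data RSVD page counts flagged by CHECKDB
DBCC UPDATEUSAGE (N'TestDB') WITH COUNT_ROWS;

-- Re-run the integrity check; it should now come back clean
DBCC CHECKDB (N'TestDB') WITH NO_INFOMSGS;
```

These stale page counts are typical of databases upgraded from SQL Server 2000, where the counts were allowed to drift.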

Different set of access to different users under same schema

Posted: 11 Jun 2013 06:55 AM PDT

I am working on a SQL database which has multiple schemas. Let's say schema 'XXX' has 50 tables, and I have 3 global groups of users accessing this schema. Can I restrict table SELECT/DELETE/INSERT/UPDATE at the global-group level? For instance, users belonging to group A should be able to create/modify/delete tables and insert/update records; group B can only select data from all tables; and group C can select from only 10 out of the 50 tables. Is this doable? If so, can a user with db_owner privileges set it up? Any input or ideas are greatly appreciated. TIA
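Yes, this is doable at both the schema and table level, and a db_owner can grant the DML permissions. A sketch, assuming the three Windows groups are already mapped to database principals (GroupA/GroupB/GroupC are placeholder names):

```sql
-- Group A: full DML on everything in the schema
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::XXX TO GroupA;
-- Creating/altering tables additionally needs:
GRANT ALTER ON SCHEMA::XXX TO GroupA;
GRANT CREATE TABLE TO GroupA;

-- Group B: read-only on the whole schema
GRANT SELECT ON SCHEMA::XXX TO GroupB;

-- Group C: read-only on specific tables only (repeat per table)
GRANT SELECT ON OBJECT::XXX.Table01 TO GroupC;
GRANT SELECT ON OBJECT::XXX.Table02 TO GroupC;
```

Schema-scoped grants automatically cover tables added to the schema later, which is usually what you want for groups A and B.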

Copy backup files

Posted: 11 Jun 2013 11:28 PM PDT

Hi Experts, I need to copy backup files totalling around 500 GB from one location to another. Before copying, I need to check the existing file dates and the available disk space. Can anyone please help? Thanks in advance.

User Connection Memory - Memory Leak??

Posted: 12 Jun 2013 01:50 AM PDT

Hello, I have had an issue on a SQL and SSRS server that I have been unable to resolve without getting someone to restart the application services to drop and recreate the TCP connections made to SQL. I have used Quest to look into this but can find very little apart from which SPID it is and where it is coming from.

We are using WebSense, and on the server there is one TCP connection of concern that started with a memory allocation of 2 and, 10 days after that restart, is at 474,849, which equates to around 3.8 GB. We have MIN and MAX memory settings of 4.8 GB. Currently we have 3 MB for the procedure cache where it was originally just under 2 GB, and 1 GB for the buffer cache where it was originally just under 2 GB also. This is not a heavily used server, so luckily the performance impact is minor. However, since yesterday the alerts for recompilation (all <40%) are driving me crazy! The procedure cache is shrunk all the way down, and once it is at its smallest, the buffer cache starts to suffer and lose its allocation.

I refuse to stop the alerts on this server just to make it quiet. I have checked DBCC MEMORYSTATUS, which gives very little information beyond what I already have. The memory clerks DMV shows 3.7 million [b]single pages[/b] for SQLCONNECTIONPOOL. Reporting Services is only using 150 MB and doesn't seem to be used much anyway.

I have logged a call with WebSense but they don't seem to have any idea what could cause this. I was hoping someone here could point me in the right direction or assist in getting more information, and more importantly a way to show WebSense, or myself, the cause of the issue and hopefully resolve it. I have read that something using prepared statements can cause a memory leak, but I am unsure how to find any evidence for this.

The server details are:
Windows 2008 R2 SP1
SQL 2008 SP3 Enterprise (10.0.5500)
2 x vCPU
8GB vRAM

I saw another post relating to a similar issue but no answers. My memory usage graph is a dead straight line with no deviations, meaning the consumption was constant. I would not expect the WebSense logging utility to be as busy during the day as at night, so I was thinking this was not related to SQL activity but more along the lines of the type of connection or something in the application. But again, how do I show this? Any help would be much appreciated. Regards, Samuel

ShrinkFile on Varbinary column

Posted: 11 Jun 2013 04:08 PM PDT

Hi, I need some help. I am shrinking a 160 GB file (the data is only 30 GB) to reclaim space. This database has varbinary columns. DBCC SHRINKFILE ('Filename', 159) took 35 minutes to shrink 1 GB, and I had to kill it since it blocked other sessions. Has anyone faced this problem when shrinking varbinary data? Do you have any better idea for reclaiming the space? Thanks.
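Shrink tends to be painfully slow on LOB/varbinary data because LOB pages are relocated almost one at a time. One common workaround (a sketch; the logical file name and sizes are placeholders) is to shrink in small steps during quiet periods, so each step can be killed without losing much progress:

```sql
-- Current size vs. used space, in MB (size is in 8 KB pages)
SELECT name,
       size / 128                              AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128   AS used_mb
FROM   sys.database_files;

-- Step the file down a few GB at a time (SHRINKFILE target is in MB)
DBCC SHRINKFILE (N'MyDataFile', 155000);
DBCC SHRINKFILE (N'MyDataFile', 150000);
-- ...continue toward the final target
```

If the blocking is still unacceptable, the usual alternative is to move the data to a new filegroup (or a new file) and drop the old one, rather than shrinking in place.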

unable to connect to server ...

Posted: 11 Jun 2013 11:24 PM PDT

From time to time our front-end application gets an 'unable to connect to server: abc' error. The error just goes away on its own. We tried increasing max worker threads, which did not help. Could someone point me in the right direction? FYI, we are on SQL 2008 R2 Standard.

Having backup problems, and unable to change service accounts...

Posted: 25 May 2013 07:02 AM PDT

Two problems, possibly related, both giving me heartburn as I'm going away for 2 weeks and don't want to leave a server in this state...

Problem 1: When I try to run a backup to anywhere *EXCEPT* the default SQL backup folder (\program files\microsoft sql server\mssql10_50.mssqlserver\mssql\backup\), it fails with a system error 5 (Access denied.) Now, the SQL Server service account is a member of the local Administrators group, and I've checked the permissions and there's nothing that should be blocking it.

Problem 2: When I try to change the service account to any other account, I get a "wmi provider error - Access denied - 0x80070005" message. All the solutions I've seen for this say "click on the browse button, key in the account name, 'check name', and re-enter the password." Well, no dice, same error. Even if I try to use one of the local system accounts, same error.

I'm logging into the server with a local Administrator account, so I have the rights to do this stuff. It was all working before I installed some Windows updates; since then it's gone pear-shaped. I've tried uninstalling the updates, but that hasn't helped. I've got 2 other SQL Servers with identical configurations (same OS version, same SQL version, heck, everything is installed to the same places!) which are [b]NOT[/b] having this problem.

I've been Googling this problem most of today, and it's PISSING ME OFF! (sorry about that)

Any help, any suggestions will be deeply appreciated. Thank you, Jason

[SQL 2012] Filestream, AlwaysOn and large files

[SQL 2012] Filestream, AlwaysOn and large files


Filestream, AlwaysOn and large files

Posted: 11 Jun 2013 11:59 PM PDT

hi all, I'm dealing with some new topics, and the more I read about them the more questions I have without finding answers. Maybe some of them can be answered here. Would be great :-)

[u]Background:[/u] we are starting a new project using SQL 2012 and AlwaysOn High Availability Groups. One of the requirements: we have to store files. What for? A customer, using a frontend, can execute different queries and save the results to different files. Then the customer can show a list of his files and download them at any time. [b]Some of the files to store can grow to a size of 200-500 MB.[/b] [b]Our database will be placed in a high availability group, using synchronous mirroring.[/b]

[u]The questions:[/u]
1) Will the files on the file system be copied to the secondary replicas too?
2) If adding a 500 MB file to a FILESTREAM table, will it automatically be transferred to the storage of the secondary replica(s)?
3) If yes, will this be done as a single transaction? That would take some time, blocking all other transactions being transferred to the secondary replicas?
4) What difference would it make to use the FileTable feature with non_transacted_access instead of FILESTREAM only?
5) Are there other sensible possibilities for saving files of this size while keeping context with the database?

Thank you for any help :-)

Keeping roles and security in sync mirrored DB's

Posted: 11 Jun 2013 06:41 PM PDT

Hi, what is the best way to keep roles and security in sync on mirrored systems (both instance and database logins)? What do people have set up to ensure that in the event of a disaster, users can log into the failed-over database? Thanks
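For SQL logins, the key point is to create them on the mirror with the same SID as on the principal, so the database users map correctly after failover instead of becoming orphaned. A sketch (the login name, password and SID below are placeholders; Microsoft's sp_help_revlogin script automates generating these statements, and Windows logins carry their SID from the domain automatically):

```sql
-- On the principal: capture name and SID of each SQL login
SELECT name, sid
FROM   sys.server_principals
WHERE  type = 'S'            -- SQL logins
AND    name NOT LIKE '##%';  -- skip internal certificate logins

-- On the mirror: recreate each login with the matching SID
CREATE LOGIN [AppLogin]
WITH PASSWORD = 'S3cret!',
     SID      = 0x1A2B3C4D5E6F00112233445566778899;
```

Server roles, server-level permissions and SQL Agent jobs are not mirrored either, so they need the same scripted treatment on the partner.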

Strange behavior with ODBC

Posted: 11 Jun 2013 08:54 AM PDT

We have a TM1 server that accounting uses for their numerology magic... anyway, it connects to our SQL server via ODBC. We use Microsoft Great Plains, and each division of our company has its own database, so we have six connections. All six of these were set up using SQL authentication with the "sa" account.

Five of them work flawlessly; the sixth fails to connect, and in the SQL log we see the following message:

[code="other"]Login failed for user ''. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only.
Error: 18456, Severity: 14, State: 58.[/code]

I am puzzled by this... first of all, I'm not sure why it says "Server is configured for Windows authentication only", because that is obviously not the case; the other five ODBC entries point at the same server using the same account. Also, the fact that it passes a blank username instead of "sa" is strange.

I re-entered the password multiple times and even tried deleting and recreating the ODBC entry. Same results. Finally, I was able to get it working by using Windows authentication instead of SQL authentication.

So basically I am wondering if anyone has any ideas as to why it would be different for this connection and not the others. All six were previously pointing to a SQL 2005 instance, all with SQL authentication, and were working just fine. Now they are pointing to a new server running SQL 2012.

BLOCKING & PERFORMANCE ISSUES IN SQL SERVER 2012

Posted: 11 Jun 2013 06:54 AM PDT

Hi, last week we increased RAM and processors. In our environment we now have 2 processors with 6 cores each and hyper-threading, so a total of 24 logical processors (6*2*2), with 48 GB RAM. I set max server memory to 28 GB for SQL. A day later, the event viewer reported insufficient memory to run a query. There is a lot of blocking, so the application hangs and nothing works properly. Could anyone suggest which aspects of SQL performance I should check? I have now increased max server memory to 38 GB out of the total 48 GB.

Server: SQL Server 2012 Enterprise
OS: Windows Server 2012 Standard

Our applications have long-running threads with many joins, but there were no blocking issues like this before; we only see them since we increased the RAM and processors. I need to resolve the blocking and make the apps run fast again. Anyone, please help!
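Before tuning memory further, it helps to see who is blocking whom and on what. A sketch using the standard DMVs:

```sql
-- Currently blocked requests, their blockers, waits and statements
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,          -- ms spent waiting so far
       t.text AS running_sql
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.blocking_session_id <> 0;
```

The wait_type of the blocked sessions (lock waits vs. memory-grant waits such as RESOURCE_SEMAPHORE) tells you whether this is a locking problem or a memory problem, which are tuned very differently.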

Justify Hardware

Posted: 11 Jun 2013 06:30 AM PDT

This is most likely my biggest weakness at the moment, but I do not know how to go about justifying better hardware with some real facts that non-techs would understand. So what is the best approach to this? Not sure if this helps, but the basic system info:

I am running a virtual SQL 2012 server with 96 GB RAM and 8 cores, attached to an EqualLogic SAN array (15k disks). I am getting PAGEIOLATCH_EX waits (you can add almost all the suffixes after the underscore). Surprisingly, partitioning actually sped the system up tremendously.

Some stats: deleting 107,000 records takes about 15 seconds, batching 5,000 at a time (this seems long); selecting 107,000 records and inserting them into another table takes 12 seconds using the SSIS data flow component.

SQL_Noob

PBM is coming up with errors when executing policies against groups in CMS groups

Posted: 11 Jun 2013 06:31 AM PDT

Hi All, I have a SQL 2012 server which I am using as the central server for Central Management Server (CMS). I have also created various policies to manage the servers, which are mostly SQL 2008, 2008 R2 or 2012. I run the PBM (policy-based management) evaluation as a SQL Server scheduled job; it works for the local server, but not for servers under CMS. The policies are correct and can be run against the local server or multiple remote servers if I execute them via Management Studio or manually. However, when I run this as a SQL Server job, it keeps failing. I have attached the error below.

I have already created a proxy account for this and checked the permissions of the SQL Server Agent account on all servers. SQL Server configuration has also been checked for remote connections, TCP/IP enabled, named pipes enabled, and the firewall. Everything else works fine and I am able to connect to the other servers via CMS and from the local server. But when I run this code, it keeps failing with a connectivity issue... Any idea will be really helpful.

[color=red]Attached is the error message[/color]

<DMF:Exception type="string">Microsoft.SqlServer.Management.Dmf.PolicyEvaluationException: Exception encountered while executing policy DATABASE MAINT - LastLogBackupDate. ---> Microsoft.SqlServer.Management.Common.ConnectionFailureException: Failed to connect to server . ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
at Microsoft.SqlServer.Management.Common.ConnectionManager.InternalConnect(WindowsIdentity impersonatedIdentity)
at Microsoft.SqlServer.Management.Common.ConnectionManager.Connect()
--- End of inner exception stack trace ---
at Microsoft.SqlServer.Management.Common.ConnectionManager.Connect()
at Microsoft.SqlServer.Management.Common.ConnectionManager.get_ServerVersion()
at Microsoft.SqlServer.Management.Common.ServerConnection.Microsoft.SqlServer.Management.Common.ISfcConnection.get_ServerVersion()
at Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection.get_ServerVersion()
at Microsoft.SqlServer.Management.Dmf.ObjectSet.<CalculateTargets>d__1d.MoveNext()
at Microsoft.SqlServer.Management.Dmf.ObjectSet.CalculateTargets(IEnumerable objectSet, Condition condition, AdHocPolicyEvaluationMode evaluationMode, Object[]& conforming, TargetEvaluation[]& violating)
at Microsoft.SqlServer.Management.Dmf.ObjectSet.CalculateTargets(SqlStoreConnection targetConnection, Condition condition, AdHocPolicyEvaluationMode evaluationMode, String policyCategory, Object[]& conforming, TargetEvaluation[]& violating)
at Microsoft.SqlServer.Management.Dmf.Policy.EvaluatePolicyUsingConnections(AdHocPolicyEvaluationMode evaluationMode, SfcQueryExpression targetQueryExpression, Int64& historyId, ISfcConnection[] targetConnections)
--- End of inner exception stack trace
---</DMF:Exception>
<DMF:ID type="long">1</DMF:ID>
<DMF:Result type="boolean">false</DMF:Result>

[color=red]PIECE OF CONNECTIVITY CODE[/color]

[code="powershell"]
# Evaluate specific Policies against a Server List
# Uses the Invoke-PolicyEvaluation Cmdlet
param([string]$ConfigurationGroup=$(Throw "Parameter missing: -ConfigurationGroup ConfigGroup"),
      [string]$PolicyCategoryFilter=$(Throw "Parameter missing: -PolicyCategoryFilter Category"),
      [string]$EvalMode=$(Throw "Parameter missing: -EvalMode EvalMode"))

# Parameter -ConfigurationGroup specifies the Central Management Server group to evaluate
# Parameter -PolicyCategoryFilter specifies the category of policies to evaluate
# Parameter -EvalMode accepts "Check" to report policy results, "Configure" to reconfigure any violations

# Declare variables to define the central warehouse in which to
# write the output, store the policies
$CentralManagementServer = " TEST"
$HistoryDatabase = "TESTAd"

# Define the location to write the results of the policy evaluation.
# Delete any files in the directory.
$ResultDir = "E:\PolicyManagement\EvaluationLogs\"
$ResultDirDel = $ResultDir + "*.xml"
Remove-Item -Path $ResultDirDel
# End of variables

# Function to insert policy evaluation results into SQL Server - table policy.EPM_PolicyHistory
function PolicyHistoryInsert($sqlServerVariable, $sqlDatabaseVariable, $EvaluatedServer, $ConfigurationGroup, $PolicyCategoryFilter, $EvaluatedPolicy, $EvaluationResults)
{
  &{
    $sqlQueryText = "INSERT INTO policy.EPM_PolicyHistory (EvaluatedServer, EvaluatedGroup, EvaluatedCategory, EvaluatedPolicy, EvaluationResults) VALUES(N'$EvaluatedServer', N'$ConfigurationGroup', N'$PolicyCategoryFilter', N'$EvaluatedPolicy', N'$EvaluationResults')"
    Invoke-Sqlcmd -ServerInstance $sqlServerVariable -Database $sqlDatabaseVariable -Query $sqlQueryText -ErrorAction Stop
  }
  trap
  {
    $ExceptionText = $_.Exception.Message -replace "'", ""
  }
}

# Function to insert policy evaluation errors into SQL Server - table policy.EPM_PolicyHistoryErrors
function PolicyErrorInsert($sqlServerVariable, $sqlDatabaseVariable, $EvaluatedServer, $ConfigurationGroup, $PolicyCategoryFilter, $EvaluatedPolicy, $EvaluationResultsEscape)
{
  &{
    $sqlQueryText = "INSERT INTO policy.EPM_PolicyHistoryErrors (EvaluatedServer, EvaluatedGroup, EvaluatedCategory, EvaluatedPolicy, EvaluationResults) VALUES(N'$EvaluatedServer', N'$ConfigurationGroup', N'$PolicyCategoryFilter', N'$EvaluatedPolicy', N'$EvaluationResultsEscape')"
    Invoke-Sqlcmd -ServerInstance $sqlServerVariable -Database $sqlDatabaseVariable -Query $sqlQueryText -ErrorAction Stop
  }
  trap
  {
    $ExceptionText = $_.Exception.Message -replace "'", ""
  }
}

# Connection to the policy store
$conn = new-object Microsoft.SQlServer.Management.Sdk.Sfc.SqlStoreConnection("server=$CentralManagementServer;Trusted_Connection=true");
$PolicyStore = new-object Microsoft.SqlServer.Management.DMF.PolicyStore($conn);

# Create recordset of servers to evaluate
$sconn = new-object System.Data.SqlClient.SqlConnection("server=$CentralManagementServer;Trusted_Connection=true");
if ($ConfigurationGroup -eq "local")
{
  $q = "SELECT @@servername;"
}
else
{
  $q = "SELECT DISTINCT server_name FROM $HistoryDatabase.[policy].[fn_EPM_ServerGroupInstances]('$ConfigurationGroup');"
}
$sconn.Open()
$cmd = new-object System.Data.SqlClient.SqlCommand ($q, $sconn);
$cmd.CommandTimeout = 0;
$dr = $cmd.ExecuteReader();

# Loop through the servers and then loop through the policies. For each
# server and policy, call the cmdlet to evaluate the policy on the server
while ($dr.Read())
{
  $ServerName = $dr.GetValue(0);
  foreach ($Policy in $PolicyStore.Policies)
  {
    if (($Policy.PolicyCategory -eq $PolicyCategoryFilter) -or ($PolicyCategoryFilter -eq ""))
    {
      &{
        $OutputFile = $ResultDir + ("{0}_{1}.xml" -f (Encode-SqlName $ServerName), (Encode-SqlName $Policy.Name));
        Invoke-PolicyEvaluation -Policy $Policy -TargetServerName $ServerName -AdHocPolicyEvaluationMode $EvalMode -OutputXML > $OutputFile;
        $PolicyResult = Get-Content $OutputFile -encoding UTF8;
        $PolicyResult = $PolicyResult -replace "'", ""
        PolicyHistoryInsert $CentralManagementServer $HistoryDatabase $ServerName $ConfigurationGroup $PolicyCategoryFilter $Policy.Name $PolicyResult;
      }
      trap [Exception]
      {
        $ExceptionText = $_.Exception.Message -replace "'", ""
        $ExceptionMessage = $_.Exception.GetType().FullName + ", " + $ExceptionText
        PolicyErrorInsert $CentralManagementServer $HistoryDatabase $ServerName $ConfigurationGroup $PolicyCategoryFilter $Policy.Name $ExceptionMessage;
        continue;
      }
    }
  }
}
$dr.Close()
$sconn.Close()

# Shred the XML results to EPM_PolicyHistoryDetails
Invoke-Sqlcmd -ServerInstance $CentralManagementServer -Database $HistoryDatabase -Query "exec policy.usp_EPM_LoadPolicyHistoryDetail" -ErrorAction Stop
[/code]

system query being constantly run against instance, but dont know why

Posted: 11 Jun 2013 04:59 AM PDT

The following query is being run against my 2012 instance, once for each DB, on a very regular basis.[code="sql"]select table_id, item_guid, oplsn_fseqno, oplsn_bOffset, oplsn_slotid from [<db name>].[sys].[filetable_updates_XXXXXXXXXX] with (readpast) order by table_id[/code] XXXXXXXX is a number.I dont have filetable enabled on the server so cant see why sql might be running this. Google has not turned up much. Any ideas on this?

Login Failure due to Mirroring for Report Server

Posted: 11 Jun 2013 04:28 AM PDT

I have set up two SQL Server 2012 (Standard Edition) boxes running Report Server, and configured the two servers (Server A and Server B) to be mirrored. All working just fine; I have failed over and failed back and everything still works.

What I have noticed in the error log on Server B (the mirror) is that I keep getting: "Login failed for user 'Service Login Account'. Reason: Failed to open the explicitly specified database 'ReportServer'. [CLIENT: <local machine>]." I can understand why this happens, as the Report Server database is not available for connection on the mirror, but I was wondering if this could somehow be turned off, as it fills up the error log without any useful information. I tried stopping the Report Server service; while that stopped the error, it also disconnected mirroring and I had to set mirroring up again from scratch.

I know mirroring for Report Server is a bit different from normal user databases, but I would think there should be some way to mirror the database without generating all these errors. Any insight or workaround would be greatly appreciated... Thanks in advance...

[T-SQL] Identify postcode from multiple fields

[T-SQL] Identify postcode from multiple fields


Identify postcode from multiple fields

Posted: 11 Jun 2013 07:53 PM PDT

Hello, I have a table with five fields used to hold address data. Due to poor practices, users have been able to enter any data in any field. They have now asked me to find the post/zip code in each record for extraction to another application. Any tips on how I can identify the post/zip code and extract it to a separate table?

At the moment I have a VB6 application that uses regular expressions for this, but I'm interested to see if I can now do it in T-SQL, as I'd like to drop the VB6 app, or at a minimum replace it with something .NET based. I'm running an instance of SQL Server 2008 R2 to host the db.
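T-SQL has no full regular expressions, but LIKE/PATINDEX character classes can make a crude first pass. A rough sketch for UK-style postcodes, with hypothetical table and column names since the real schema isn't shown (a SQLCLR function wrapping .NET Regex would be the cleaner .NET-based route, and it could reuse the patterns from the VB6 app):

```sql
-- Very rough: keeps fields shaped like "A9...9AA". LIKE patterns cannot
-- enforce length, so expect false positives that need a second pass.
SELECT AddressID, Addr5 AS Postcode
FROM   dbo.Addresses
WHERE  Addr5 LIKE '[A-Z][A-Z0-9]%[0-9][A-Z][A-Z]'
UNION ALL
SELECT AddressID, Addr4
FROM   dbo.Addresses
WHERE  Addr4 LIKE '[A-Z][A-Z0-9]%[0-9][A-Z][A-Z]'
-- ...repeat for the remaining three address fields
```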

Performance when predicate is > (greater than) ...

Posted: 12 Jun 2013 12:53 AM PDT

My table consists of 4 location IDs, each with a sensor reading value for each minute (approx.) of the day, over a period of 1 month (178,000 rows +/-):

CREATE TABLE [CJ_LBSVTP].[IP21_Import_PowerTier](
    [locationID] [char](3) NOT NULL,
    [stamp] [datetime] NOT NULL,
    [value] [float] NULL)

... and my query has to create a start and end time from the above. So, if the table rows are:

Location  Time              Value
1234      2013-03-01 00:01  98
1234      2013-03-01 00:02  97

the results should be:

Location  StartTime         EndTime
1234      2013-03-01 00:01  2013-03-01 00:02

My query code to do this is below, but it is taking a horrendously long time to execute. I have been playing with indexes for some time now and simply cannot get a decent time out of this one. Are there any tips for such a query utilizing a '<' or '>' in the predicate? Many thanks.

--------- Query Code ---
SELECT AA.LocationID,
       AA.stamp AS startStamp,
       MIN(BB.stamp) AS endStamp
FROM myTable AS BB
INNER JOIN myTable AS AA
    ON BB.LocationID = AA.LocationID
   AND BB.stamp > AA.stamp
GROUP BY AA.LocationID, AA.stamp
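The inequality self-join is triangular: every row is compared with every later row at the same location, which is why it crawls regardless of indexing. If the instance is SQL Server 2012 or later, LEAD removes the join entirely, making one ordered pass per location; a sketch against the table defined above (an index on (locationID, stamp) supports the sort):

```sql
-- Each row's "end" is simply the next reading at the same location
SELECT locationID,
       stamp AS startStamp,
       LEAD(stamp) OVER (PARTITION BY locationID
                         ORDER BY stamp) AS endStamp
FROM   [CJ_LBSVTP].[IP21_Import_PowerTier];
```

On pre-2012 versions, a ROW_NUMBER-based equi-join (numbering rows per location and joining row n to row n+1) gets the same effect without the triangular comparison.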

can any one give me the answer of my query why I am getting the error message

Posted: 11 Jun 2013 09:32 PM PDT

use [12]
go
CREATE TABLE Test1
(
    TestID uniqueidentifier CONSTRAINT Test_TestID_Default DEFAULT newsequentialid(),
    Inserted datetime CONSTRAINT Test_Inserted_Default DEFAULT getdate()
)

ERROR MESSAGE:
Msg 2714, Level 16, State 4, Line 1
There is already an object named 'Test_TestID_Default' in the database.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint. See previous errors.

I am executing the query above, but it fails with the error message shown.
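Msg 2714 means an object with that exact name already exists: constraint names are database-wide, not per-table, so a previous run of this script (or the same constraint names used on another table) collides. A sketch that drops any leftover table first; dropping the table drops its constraints, but if the old names live on a different table they must be dropped there instead:

```sql
-- Remove a leftover copy of the table (and its constraints) from a prior run
IF OBJECT_ID(N'dbo.Test1', N'U') IS NOT NULL
    DROP TABLE dbo.Test1;
GO
CREATE TABLE dbo.Test1
(
    TestID   uniqueidentifier
        CONSTRAINT DF_Test1_TestID   DEFAULT NEWSEQUENTIALID(),
    Inserted datetime
        CONSTRAINT DF_Test1_Inserted DEFAULT GETDATE()
);
```

Including the table name in each constraint name (DF_Test1_...) is a common convention precisely because it avoids these cross-table collisions.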

Calculate Previous Business Day Exclude Saturday, Sunday and Holiday

Posted: 11 Jun 2013 09:41 PM PDT

Hi, my name is Kashif. I am very new to SQL Server. I want to create a UDF that returns the previous business day (excluding Saturday, Sunday and holidays). The holiday list is in a table called 'tblHoliday'. Please help me get the desired result. Thanks, Kashif
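A minimal sketch, assuming tblHoliday has a date column named HolidayDate (adjust to the real column name). Note that DATENAME(weekday, ...) depends on the session language; comparing DATEPART(weekday, ...) against @@DATEFIRST is the locale-proof variant:

```sql
CREATE FUNCTION dbo.fn_PreviousBusinessDay (@d datetime)
RETURNS datetime
AS
BEGIN
    DECLARE @prev datetime = DATEADD(day, -1, @d);
    -- Keep stepping back while we land on a weekend or a holiday
    WHILE DATENAME(weekday, @prev) IN ('Saturday', 'Sunday')
          OR EXISTS (SELECT 1 FROM dbo.tblHoliday
                     WHERE HolidayDate = @prev)
        SET @prev = DATEADD(day, -1, @prev);
    RETURN @prev;
END;
```

For set-based use over many rows, a calendar table with a pre-flagged IsBusinessDay column usually performs better than calling a loop-based scalar UDF per row.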

Can we call stored Procedure inside a function

Posted: 11 Jun 2013 08:57 PM PDT

Hi, can we call a stored procedure inside a function? Thanks

qry options

Posted: 11 Jun 2013 03:56 AM PDT

I want to do this:

select count(*)
from (select col1, col2, ......col18
      from REVemployee.tbemp
      group by col1, col2, ......col18
      having COUNT(*) > 1) a

Is there a better way of doing the same thing for better performance?

Simple MAX Query not working

Posted: 11 Jun 2013 07:05 PM PDT

I'm confused about why this query is not working. Data:

vend_i  dept  tot_wgt
1       1     75
2       2     40
3       4     50
3       5     80

Result I need:

vend_i  dept
1       1
2       2
3       5

Basically I need the largest dept for each vendor, i.e. the one it has served the most; we can use the tot_wgt column to find the largest dept. Can anyone please provide a query to get the above result? Please, it's very urgent.
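A plain MAX doesn't work here because MAX(tot_wgt) per vendor doesn't carry the matching dept along with it. Ranking the rows does; a sketch, with the table name as a placeholder:

```sql
-- Rank each vendor's rows by weight, then keep the top row per vendor
SELECT vend_i, dept
FROM (
    SELECT vend_i,
           dept,
           ROW_NUMBER() OVER (PARTITION BY vend_i
                              ORDER BY tot_wgt DESC) AS rn
    FROM dbo.YourTable
) ranked
WHERE rn = 1;
```

If ties on tot_wgt should return multiple departments per vendor, swap ROW_NUMBER for RANK.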

inline table function

Posted: 11 Jun 2013 05:09 AM PDT

I need to create a function that returns a concatenated street. I first used a scalar function, but I read on the internet that an inline table function is better. How can I convert my scalar function to an inline table function, and how can I call the function by joining to other tables? Also, is it possible to get not only one student's street, but a group of students' streets, by calling the function? I attached the script.
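The general shape of the conversion, with hypothetical table and column names since the attached script isn't reproduced here: an inline TVF is a single RETURN SELECT, and CROSS APPLY calls it per row for a whole set of students, which answers both the join question and the group question:

```sql
CREATE FUNCTION dbo.fn_StudentStreet (@StudentID int)
RETURNS TABLE
AS
RETURN
    -- One SELECT, no BEGIN/END: the optimizer inlines this into the caller
    SELECT a.HouseNo + ' ' + a.StreetName + ', ' + a.City AS Street
    FROM   dbo.StudentAddress a
    WHERE  a.StudentID = @StudentID;
GO

-- One student:
SELECT Street FROM dbo.fn_StudentStreet(42);

-- A whole group of students, via APPLY:
SELECT s.StudentID, f.Street
FROM   dbo.Student s
CROSS APPLY dbo.fn_StudentStreet(s.StudentID) f;
```

This is why inline TVFs beat scalar UDFs: the body is expanded into the calling query's plan instead of being invoked once per row.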

Variable Kills Performance

Posted: 11 Jun 2013 01:33 AM PDT

Well, I was stumped for a title for this, as I am stumped as to the cause! With the code below, the issue is in the WHERE clause: "LB.DB = @DB". As written, this variable is equal to the string 'NA', and the execution time is horrendous. However, if I hard-code "LB.DB = 'NA'", then the code executes in about 2 minutes, which is as expected. For the life of me, I cannot figure out WHY the execution is affected by the use of a variable to carry in the required WHERE value. If it helps, the field [DB] is a char(2), originating 3 levels of nesting down in a set of views. Any insights as to what is happening here would be appreciated!! Many thanks, Simon

-------------------------------------------------------------------
Declare @DB as char(2)
select @DB = 'NA'

SELECT LB.DB,
       LB.CaseKey AS [Case Key],
       LB.Month AS [Month],
       LB.RollUpClass AS [Product],
       LB.SourceOfProduct as [Plant],
       LB.DeliveredQty AS [Delivered Qty]
into #LB
FROM [CJ_LBSVTP].[3_LB_ProductSourcePlant] AS LB
WHERE (LB.CaseKey = 8751)
  AND (LB.Month = CONVERT(datetime,'2012-10-01'))
  AND (LB.DB = @DB)
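A likely cause: with the literal 'NA', the optimizer can use the column's statistics histogram, but a local variable's value is unknown at compile time, so it falls back to an average-density estimate and can pick a much worse plan. OPTION (RECOMPILE) makes the statement compile with the variable's actual value; a sketch applied to the query above:

```sql
DECLARE @DB char(2) = 'NA';

SELECT LB.DB, LB.CaseKey, LB.[Month], LB.RollUpClass,
       LB.SourceOfProduct, LB.DeliveredQty
INTO   #LB
FROM   [CJ_LBSVTP].[3_LB_ProductSourcePlant] AS LB
WHERE  LB.CaseKey = 8751
  AND  LB.[Month] = CONVERT(datetime, '2012-10-01')
  AND  LB.DB = @DB
OPTION (RECOMPILE);  -- lets the optimizer see @DB = 'NA' at compile time
```

Wrapping the statement in a stored procedure with @DB as a parameter is the other common fix, since parameters (unlike local variables) are sniffed on first compile.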

How to generate Sequence numbers in a temporary table?

Posted: 11 Jun 2013 04:18 AM PDT

I am trying to extract data from multiple table joins into a temporary table so that I can count Occupancy and Vacancy parking transactions based on StartTime. The problem I am having is that the sequence of occurrences for each transaction is not accurate in the database. My solution is to create a temporary table and generate my own sequence numbers. I did generate my own sequence numbers, but the numbers are not in sequence and I am also getting duplicates. See the sample code below as well as the data output (Excel spreadsheet attached) with inconsistent sequence numbers. As you can see, the SN numbers keep starting over at 1 as soon as they reach a certain point. If anyone knows how to fix this issue, please provide an example. Any help would be greatly appreciated.

SAMPLE CODE

IF OBJECT_ID('tempdb..#Tmp1') IS NOT NULL DROP TABLE #Tmp1  -- all transactions + SN

CREATE TABLE #Tmp1
(
    [SN] [int],
    [ParkingSpaceId] [int],
    [MeterId] [int],
    [BlockFaceID] [int],
    [occupancystatus] [int],
    [StartTime_PT] [datetime],
    [LastSensorEvent_PT] [datetime],
    [State] [nvarchar](50)
)

INSERT #Tmp1
SELECT
    ROW_NUMBER() OVER (PARTITION BY ST.ParkingSpaceId ORDER BY StartTime) AS SN,
    ST.ParkingSpaceId,
    MeterId,
    [BlockFaceID],
    [occupancystatus],
    ST.StartTime AS StartTime_PT,
    PS.LastSensorEvent AS [LastSensorEvent_PT],
    ST.[State]
FROM SensorTransactions ST
JOIN ParkingSpaces PS ON PS.ParkingSpaceId = ST.ParkingSpaceId
WHERE BlockfaceId = 996
  AND DATEADD(HH, -7, StartTime) BETWEEN '2013-05-28 08:00:00.000' AND '2013-05-28 20:00:00.000'
  AND [State] IN ('Occupied', 'Vacant')
ORDER BY StartTime

SELECT * FROM #Tmp1

OUTPUT - the Excel spreadsheet is also attached.

SN  ParkingSpaceId  MeterId  BlockFaceID  occupancystatus  StartTime_PT   LastSensorEvent_PT  State
1   6366            5775     996          1                5/28/13 10:21  6/10/13 19:57       OCCUPIED
2   6366            5775     996          1                5/28/13 10:22  6/10/13 19:57       VACANT
3   6366            5775     996          1                5/28/13 10:51  6/10/13 19:57       OCCUPIED
1   6367            5776     996          1                5/28/13 8:44   6/10/13 16:44       OCCUPIED
2   6367            5776     996          1                5/28/13 8:50   6/10/13 16:44       VACANT
3   6367            5776     996          1                5/28/13 11:01  6/10/13 16:44       OCCUPIED
1   6370            5779     996          1                5/28/13 8:12   6/10/13 19:37       OCCUPIED
2   6370            5779     996          1                5/28/13 8:14   6/10/13 19:37       VACANT
3   6370            5779     996          1                5/28/13 9:45   6/10/13 19:37       OCCUPIED
4   6370            5779     996          1                5/28/13 9:45   6/10/13 19:37       VACANT
5   6370            5779     996          1                5/28/13 9:56   6/10/13 19:37       OCCUPIED
1   6371            5780     996          1                5/28/13 8:19   6/10/13 16:39       OCCUPIED
2   6371            5780     996          1                5/28/13 17:59  6/10/13 16:39       VACANT
3   6371            5780     996          1                5/28/13 18:02  6/10/13 16:39       OCCUPIED
1   6372            5781     996          1                5/28/13 10:38  6/10/13 19:21       VACANT
2   6372            5781     996          1                5/28/13 10:54  6/10/13 19:21       OCCUPIED
3   6372            5781     996          1                5/28/13 12:53  6/10/13 19:21       UNKNOWN
4   6372            5781     996          1                5/28/13 13:19  6/10/13 19:21       OCCUPIED
1   6373            5782     996          1                5/28/13 8:27   6/10/13 19:31       OCCUPIED
2   6373            5782     996          1                5/28/13 8:30   6/10/13 19:31       VACANT
3   6373            5782     996          1                5/28/13 8:54   6/10/13 19:31       OCCUPIED
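A minimal runnable sketch of the behavior in the query above, using Python's built-in sqlite3 in place of SQL Server (requires SQLite 3.25+ for window functions; the table and values below are invented stand-ins). It shows that `ROW_NUMBER() OVER (PARTITION BY ...)` restarts at 1 for every partition by design, and that a single continuous sequence requires dropping the `PARTITION BY` clause:

```python
import sqlite3

# Sketch only: SQLite stands in for SQL Server, and the rows are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SensorTransactions (ParkingSpaceId INT, StartTime TEXT, State TEXT);
INSERT INTO SensorTransactions VALUES
  (6366, '2013-05-28 10:21', 'OCCUPIED'),
  (6366, '2013-05-28 10:22', 'VACANT'),
  (6367, '2013-05-28 08:44', 'OCCUPIED');
""")

# PARTITION BY ParkingSpaceId: the numbering restarts at 1 for each space.
per_space = conn.execute("""
    SELECT ROW_NUMBER() OVER (PARTITION BY ParkingSpaceId ORDER BY StartTime) AS SN,
           ParkingSpaceId
    FROM SensorTransactions
    ORDER BY ParkingSpaceId, StartTime
""").fetchall()
print(per_space)

# No PARTITION BY: one continuous sequence across all spaces.
global_seq = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY StartTime) AS SN,
           ParkingSpaceId
    FROM SensorTransactions
    ORDER BY StartTime
""").fetchall()
print(global_seq)
```

Note also that the `ORDER BY` on the `INSERT` above does not guarantee any stored order in the temp table; the order that matters is the one inside `OVER (... ORDER BY StartTime)`.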

Contains fulltext search (contains) does not work within a stored procedure.

Posted: 11 Jun 2013 04:53 AM PDT

Greetings,

The server info is shown below:

Microsoft SQL Server 2008 (SP2) - 10.0.4000.0 (X64) Sep 16 2010 19:43:16 Copyright (c) 1988-2008 Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

I have written a stored procedure to be used with a report that must search several columns for keywords. The keywords are stored in a table, and I'm declaring a variable to load them and prep the search string as seen below:

declare @Keywords varchar(8000) = N''

select @Keywords = (select '|' + char(34) + '*' + keyword + '*' + char(34)
                    from LocalM5.dbo.TMKeyword
                    where keyword <> 'T&M'
                    FOR XML PATH(''))

SELECT @Keywords = STUFF(@Keywords, 1, 1, '')

select *
from TMQueue t1
WHERE CONTAINS((t1.[cust_prob_descr], t1.[MyNote]), @Keywords)

If this is executed in a procedure, or stand-alone anywhere after other statements (such as populating the table TMQueue), the query returns no results. If I populate that table and then execute just the block from the declare through the select statement, I get data. So the first thing I did was print the value of @Keywords and build the statement so it executes with no variables, like:

select *
from TMQueue t1
WHERE CONTAINS((t1.[cust_prob_descr], t1.[MyNote]),
    '"*Billable*"|"*Boom*"|"*Bucket*"|"*Cable*"|"*Coax*"|"*DOA*"|"*Fiber*"|"*Fluke*"|"*Ladder*"|"*Lift*"|"*Lightning*"|"*Materials*"|"*Not Onsite*"|"*Rental*"|"*Scissor*"|"*Scope*"|"*Water*"')

Even this does not work within a procedure, or if called after any statements that execute before it. I have never seen an issue like this, and I don't understand why it executes fine when run "alone" but not as part of a block.
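As a side note, the `FOR XML PATH('') + STUFF` pattern above is just string aggregation: each keyword is wrapped in `"*...*"` and the terms are joined with the `|` OR operator, with `STUFF` stripping the leading separator. A minimal Python sketch of the string being assembled (the keyword list here is an invented stand-in for LocalM5.dbo.TMKeyword), which is handy when printing @Keywords to debug:

```python
# Sketch of the CONTAINS search string the T-SQL above builds.
keywords = ["Billable", "Boom", "Bucket"]  # stand-in for the keyword table

# One '|' + '"*kw*"' fragment per row, as produced by FOR XML PATH('')...
concatenated = "".join('|' + '"*' + kw + '*"' for kw in keywords)

# ...then STUFF(@Keywords, 1, 1, '') removes the leading '|'.
search_string = concatenated[1:]
print(search_string)  # "*Billable*"|"*Boom*"|"*Bucket*"
```

(The sketch ignores XML entity escaping, which is why the T-SQL filters out the 'T&M' keyword.)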

Stored proc exit

Posted: 11 Jun 2013 02:18 AM PDT

Hello,

I am new to T-SQL coding and I am in the process of adding a piece of code to an already existing stored procedure. Here is my code:

select @Count1 = count(*) from [test] where [testcol] = 10
select @Count2 = count(*) from [test1] where [testcol] = 10

If @Count1 > 0 or @Count2 > 0, I want to exit the stored procedure and not process any further code in it. Only if that condition fails should the rest of the procedure run. How do I achieve this?

Thanks
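A minimal runnable sketch of the guard-and-early-exit pattern being asked about, using Python's built-in sqlite3 in place of SQL Server (table names are taken from the question; everything else is invented). In T-SQL itself the same guard is typically written near the top of the procedure as `IF @Count1 > 0 OR @Count2 > 0 RETURN;`:

```python
import sqlite3

# Sketch only: check the two counts first and return before doing any further
# work, mirroring T-SQL's IF ... RETURN inside a stored procedure.
def process(conn):
    count1 = conn.execute("SELECT COUNT(*) FROM test  WHERE testcol = 10").fetchone()[0]
    count2 = conn.execute("SELECT COUNT(*) FROM test1 WHERE testcol = 10").fetchone()[0]
    if count1 > 0 or count2 > 0:
        return "skipped"      # early exit: the T-SQL equivalent is a bare RETURN
    # ... the rest of the procedure's work runs only when both counts are zero ...
    return "processed"

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test  (testcol INT);
CREATE TABLE test1 (testcol INT);
""")
print(process(conn))          # both tables empty, so the work runs
conn.execute("INSERT INTO test VALUES (10)")
print(process(conn))          # a matching row now exists, so the guard fires
```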
