Thursday, 26 August 2010

Why do emails from GMail have formatting issues in Outlook?

At work we are starting to move over to Google Enterprise Apps. During the process we have come across an email formatting issue when emails are sent to users on Outlook (2007).


The root cause of this issue seems to be Outlook and its poor HTML support, more specifically its poor support of CSS and inline styles.


As far as I can tell, the GMail text editor applies font size by default using the style attribute on a <span>, which Outlook doesn't handle very well, so it falls back to a default size. The font face is also applied using CSS, which Outlook doesn't support very well (if at all), so Outlook uses whatever it has set as the default (maybe Times New Roman).


Using the 'Default Text Styling' lab helps with the main text, as long as you don't change the font size in the text editor.

e.g. changing the font size in the editor produces:
<span class="Apple-style-span" style="font-size: x-small;">asdasdasd</span>


Whereas the Default Text Styling lab uses <font>, which works better in Outlook


e.g.:
<font size="2"><font face="verdana,sans-serif">asdasdasd</font></font>


You can see this by clicking 'view source' in the web browser.


Any workarounds?

Tuesday, 3 August 2010

What is the Google "MayDay" update?

"MayDay" is the label the SEO community has given to some recent Google long-tail search algorithm changes, because the changes were spotted around 1st May 2010. Below are some attributes of this change.
  • Affects long-tail searches.
  • An algorithmic change, rather than a crawler or indexing change.
  • It's about how Google assesses the best quality matches.
  • Great content and authority are still valid.
  • Nothing directly to do with the Caffeine update; the two sit side by side.
  • Not a manual process, no humans involved, fully automated.
  • Intended to be a permanent change.

Matt Cutts on the topic




Tuesday, 13 July 2010

My Amazon AWS wish list

My Amazon AWS wish list goes a little something like this:


EC2:
  1. Instance notes field: Ability to attach some custom info such as a server name, role, and/or description to an EC2 instance. (Amazon EC2 Resource Tagging)
  2. Multi user + groups for access control. (AWS Identity and Access Management (IAM) )
  3. Custom human friendly machine name/alias field, in addition to instance id (Not tags). This is half there. The "Name" tag is special and appears in other views.
  4. Post launch parameter change: Ability to change certain EC2 instance parameters after launch such as security groups (avail in VPC).
  5. More traditional and logical EC2 firewall security grouping system: security groups don't work how you would expect.
  6. Multiple vNIC / IP address support.
  7. Packet sniffing - Firewall level traffic visibility: to help troubleshoot connectivity issues.
  8. Quicker support for significant Windows Server releases such as Windows Server 2008 R2 (still not available).
  9. Windows support for GPU instances.
  10. Description field for Elastic IPs
ELB:
  1. Host header support for the ping health check, i.e. the ping uses a fully qualified URL and passes the hostname in the Host HTTP header.
  2. Elastic IP support.
  3. Support for load balancing internal/backend services.
Other:
  1. DNS service. (Amazon Route53)
  2. Console support for Identity and Access Management.
  3. Billing breakdown/view by resource group/tag
  4. Elastic Beanstalk .Net/Mono support 
  5. More open information on peering, and/or more direct Internet connectivity options via ISPs as a product (not private connectivity). Now available: see AWS Direct Connect.
  6. Move instances/resources between accounts without copying, or some other form of logical grouping: grouping containers, billing containers, and security containers.
  7. Ability to copy AMIs between regions.
  8. Copy an RDS backup to S3 in another region.
Last updated: 02/07/2014

Wednesday, 7 July 2010

What can I shift to "the Cloud"?

Over the last couple of years "Cloud" has been the buzz word, and it has been gathering some serious traction. This is obviously a form of outsourcing. It is also the latest incarnation of the old buzz words Application Service Provider (ASP) and Utility Computing, and the next evolutionary stage of concepts such as managed services, grid computing, virtualization, co-location, and hosted services. You could even say a Cloud Computing service is a composition of all of these plus a bunch of value add. There's no genius insight there - you knew all of this already, right?

At some point everybody in business (and I mean it should be everybody) should be wondering if they could gain some leverage from "the Cloud". What you are really asking yourself is "what can/should I outsource?". You could also think of the answer to that question as the justification for outsourcing something rather than servicing it in-house. At least this is how I arrived at the question.

What I've concluded is that anything and everything that is not your core business can and should be outsourced, and hence shifted into a Cloud service, if one exists for the function/skill/technology in question. I've heard that some VCs insist on the use of cloud services.

your core business = your secret weapon/differentiators/intellectual property => you should be in full control

For example, for most companies running an email system such as Exchange or Lotus Notes in-house is not a business differentiator. Rather it is a back office system, a tool that is required to conduct business - like a pen or electricity, a means to an end. The same applies to phones, PC O/S, server hosting, CRM software, and ERP software.


There are many more questions to be asked, but most of them go towards answering the question "which cloud service?" - that's for another post.


This is obviously a simplified view of the topic, and directly or indirectly the sales pitch of many cloud service providers. Here are a few links with further information.

http://www.businessknowhow.com/startup/outsource.htm
http://www.doublecloud.org/2010/07/when-not-to-use-cloud/
http://www.entrepreneur.com/humanresources/hiring/article206226.html

UPDATE:

Just seen a really good post, The end of bespoke by Matt Ballantine, where he goes into a lot more detail, although coming at it from a different direction.

Tuesday, 22 June 2010

Amazon EC2 - Security Groups don't work the way you expect

Nested security groups: sounds great, and exactly what you need to organise your security rules. However, they don't appear to work as you would expect, i.e. the way the grouping concept works in other software products.

So far I don't fully understand how it is supposed to work, and I haven't found that key piece of documentation either. What I do understand at present is that rules within nested security groups (ones that are not applied to any EC2 instances) do not apply to an EC2 instance that has the parent group applied.

For example, say I have a security group called 'sec-group-A' which contains 'sec-group-B', and 'sec-group-B' has one rule that allows RDP from 0.0.0.0/0. Apply 'sec-group-A' to an EC2 instance (a Windows instance) and you will not be able to connect using RDP. If you add the RDP allow rule directly to 'sec-group-A', you will.

So what is the point of grouping - obviously there is a usage? It seems that when a security group is used within another group, it is treated as a tag identifying EC2 instances that are members of that group, rather than as a set of rules to inherit.
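A sketch of the distinction using the classic EC2 API tools (the group names are just this post's examples; check the flags against your version of the tools). A rule added to the nested group does not flow through to instances that only have the parent group:

C:\>ec2-authorize sec-group-B -P tcp -p 3389 -s 0.0.0.0/0

A rule added directly to the applied group is what actually opens RDP:

C:\>ec2-authorize sec-group-A -P tcp -p 3389 -s 0.0.0.0/0

Whereas referencing a group as a traffic source means "allow traffic from instances that are members of that group":

C:\>ec2-authorize sec-group-A -o sec-group-B -u <your_aws_account_id>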

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?concepts-security.html

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?using-network-security.html

http://somic.org/2009/09/21/security-groups-most-underappreciated-feature-of-amazon-ec2/

http://developer.amazonwebservices.com/connect/thread.jspa?threadID=36513

http://www.shlomoswidler.com/2009/06/tagging-ec2-instances-using-security_30.html

http://aws.typepad.com/aws/2010/06/building-three-tier-architectures-with-security-groups.html

Wednesday, 9 June 2010

Install and configure the Amazon RDS Command Line Tools

On Windows XP


Installation:
1. Ensure that Java version 1.5 or higher is installed on your system (check with: java -version). Java SE 1.6 works.
2. Download the latest deployment zip file from here and unzip it into "c:\program files\amazon\aws\rdscli" on Windows.
3. Set the following environment variables:
3.1 AWS_RDS_HOME - the directory where the deployment files were copied to.
        (Check with: dir %AWS_RDS_HOME%\bin - it should list rds-describe-db-instances etc.)
3.2 JAVA_HOME = "C:\Program Files\Java\jre6" (the Java installation home directory).
3.3 EC2_REGION = eu-west-1
3.4 EC2_URL = http://%EC2_REGION%.ec2.amazonaws.com/
4. Add "%AWS_RDS_HOME%\bin" to your path.
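For example, to set these for the current command prompt session only (a sketch; use setx or the System Properties dialog to make them permanent):

C:\>set AWS_RDS_HOME=C:\Program Files\Amazon\AWS\RDSCLI
C:\>set JAVA_HOME=C:\Program Files\Java\jre6
C:\>set EC2_REGION=eu-west-1
C:\>set EC2_URL=http://%EC2_REGION%.ec2.amazonaws.com/
C:\>set PATH=%PATH%;%AWS_RDS_HOME%\bin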


Configuration:
Provide the command line tool with your AWS user credentials. There are two ways you can provide credentials: AWS keys, or X.509 certificates.


Using AWS Keys:
1. Create a credential file: The deployment includes a template file %AWS_RDS_HOME%/credential-file-path.template. Edit a copy of this file to add your information.


2. There are several ways to provide your credential information:
      a. Set the following environment variable: AWS_CREDENTIAL_FILE=<the file created in 1> e.g. AWS_CREDENTIAL_FILE = %AWS_RDS_HOME%\credential-file-path.template
      b. Alternatively, provide the following option with every command --aws-credential-file <the file created in 1>
      c. Explicitly specify credentials on the command line: --I ACCESS_KEY --S SECRET_KEY
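The credential file itself is just two lines. From memory the template looks like this (the values are placeholders for your own keys):

AWSAccessKeyId=<your_access_key_id>
AWSSecretKey=<your_secret_access_key>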
      
Using X.509 Certs:
1. Save your certificate and private key to files, e.g. my-cert.pem and my-pk.pem.


2. There are two ways to provide the certificate information to the command line tool:
    a.  Set the following environment variables:
        EC2_CERT=/path/to/my-cert.pem
        EC2_PRIVATE_KEY=/path/to/my-pk.pem
    b.  Specify the files directly on command-line for every command:
        <command> --ec2-cert-file-path=/path/to/my-cert.pem --ec2-private-key-file-path=/path/to/my-pk.pem


Running:
To check that your setup works properly, run the following commands:
   $ rds --help
      You should see the usage page for all RDS commands.
   $ rds-describe-db-instances --headers
      You should see a header line. If you have database instances already configured, you will see a description line for each database instance.
      

Monday, 26 April 2010

If InnoDB Then lower_case_table_names Equals 1

In MySQL, the system variable lower_case_table_names controls whether identifiers are case sensitive. The default setting is 0 if the OS file system is case sensitive and 1 if it is not - there are some exceptions when it comes to Mac OS. However the documentation recommends lower_case_table_names = 1 when using the InnoDB engine.


"Exception: If you are using InnoDB tables, you should set lower_case_table_names to 1 on all platforms to force names to be converted to lower case."


http://mysql2.mirrors-r-us.net/doc/refman/5.1/en/identifier-case-sensitivity.html
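To follow this recommendation, set the variable in the [mysqld] section of my.cnf (my.ini on Windows) before creating any tables, e.g.:

[mysqld]
lower_case_table_names=1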


UPDATE: As of 15/06/2010 this parameter cannot be changed on Amazon RDS instances. It appears to be set to lower_case_table_names=0.

Sunday, 18 April 2010

Installing/Upgrading an official release of Django on Windows

Google App Engine SDK 1.3 includes Django 0.96; however, App Engine itself supports 1.1, so if you wish to develop against Django 1.1 you will have to manually install it as follows.
  1. Delete your Django site-package from your Python site-packages folder (typically <python-install-dir>/lib/site-packages).
  2. Download the latest release from the Django download page.
  3. Untar the downloaded file (e.g. tar xzvf Django-NNN.tar.gz, where NNN is the version number of the latest release). You can download the command-line tool bsdtar to do this, or you can use a GUI-based tool such as 7-zip.
  4. Change into the directory created in step 3 (e.g. cd Django-NNN).
  5. If you're using Linux, Mac OS X or some other flavor of Unix, enter the command sudo python setup.py install at the shell prompt. If you're using Windows, start up a command shell with administrator privileges and run the command setup.py install.
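To confirm which version you now have, you can ask Django itself from a shell (Python 2 print syntax, as used at the time):

C:\>python -c "import django; print django.get_version()"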

Wednesday, 7 April 2010

Amazon RDS - Importing a database using the MySQL Administrator tool

Using the backup and restore feature of the MySQL Administrator tool does seem to work against Amazon RDS instances. I've done a basic test importing a very small database containing a handful of tables (including blob fields), a trigger, and indexes. However, I came across the following error initially.


"Error while executing this query:DROP TRIGGER /*!50030 IF EXISTS */ `your_trigger_name`;
The server has returned this error message:You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)
MySQL Error."


I started going down the SUPER privilege route, as most people would I imagine. I discovered very quickly that you cannot change user privileges on Amazon RDS instances. Looking at Amazon's recommendations for importing data, I found they recommend switching off automated backup and binary logging during the import procedure, to improve import performance and reduce storage requirements - full details in the
Amazon RDS Customer Data Import Guide for MySQL


It worked like a charm!


HOWTO: Disable Automatic Backup and Binary Logging in Amazon RDS


C:\>rds-modify-db-instance <your_DbInstanceIdentifier> --backup-retention-period 0 --apply-immediately


This will apply the change immediately, meaning the database will be unavailable while the change is applied and the instance reboots. This took about 5 minutes on an empty small instance.


C:\>rds-modify-db-instance <your_DbInstanceIdentifier> --backup-retention-period 0


This will apply the change during the next scheduled maintenance window.


C:\>rds-describe-db-instances --headers


To check if the change has been applied.


Don't forget to switch on automatic backup after the import procedure is complete by changing the backup retention period to a value greater than 0.
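For example, to turn daily backups back on with immediate effect (the retention period is in days; 1 is just an example value):

C:\>rds-modify-db-instance <your_DbInstanceIdentifier> --backup-retention-period 1 --apply-immediately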


You need the Amazon RDS Command Line Toolkit installed for the above commands to work.

Thursday, 18 March 2010

Serialization issue with MySqlDateTime in a DataSet

If you encountered the error "Unable to convert MySQL date/time value to System.DateTime", this article is related. So is this one: .Net Exception: Unable to convert MySQL date/time value to System.DateTime


-- UPDATE 2: And here's the final answer: set Allow Zero Datetime=false (the default, I think) in your connection string, and make sure you don't put meaningless zero valued dates into your database; then everything works as it should. DateTime and null values are inserted and selected successfully from date columns. Serialization works.

-- UPDATE 1: Having looked into this a bit more, it would appear that System.DateTime is unable to represent a zero date, i.e. 0/0/0000 00:00:00 (which makes no sense anyway). I suspect the MySql Connector has to represent this for legacy / backward compatibility reasons, hence MySqlDateTime existing. It would make sense if it used System.DateTime when Allow Zero Datetime = false; I've not tried this yet.

------------

While adding MySQL support to our DAL I came across a DataSet serialization issue. We have an object that represents a stored search. This object is hydrated by passing it a serialised version of a DataSet representing a search returned from the database.

MySqlDataAdapter seems to set up any column of type DateTime as MySql.Data.Types.MySqlDateTime rather than System.DateTime (which is what the SQL Server and Oracle providers use). So far I've not found any option to control this, or an explanation as to why the MySql ADO.net provider has been implemented this way. This hadn't posed a significant issue until I had to serialize the DataSet, only to find that values for fields of type MySqlDateTime serialize differently to System.DateTime. Rather than representing the date value as a simple string, it is a chunk of XML. This XML (which becomes the inner XML of the field element - the default behaviour) contains an XML declaration and is entity escaped. It is therefore not valid XML and one cannot do much with it.

To get around this problem I implemented my own code to populate a DataTable from a DataReader, converting all MySqlDateTime columns to System.DateTime in the DataTable schema before populating the rows.

I will be taking a closer look at the provider code with the hope to at least understand why the MySqlDateTime type is required.

Note: I'm using MySQL Connector/Net 6.1.1 here.

Public Shared Sub FillDataTable(ByRef DataTable As DataTable, ByVal Command As MySqlCommand)

    Dim r As MySqlDataReader = Nothing

    If DataTable Is Nothing Then
        DataTable = New DataTable
    End If

    Try
        r = Command.ExecuteReader

        '-- Create the schema, mapping MySqlDateTime columns to System.DateTime
        Dim col As DataColumn
        For i As Integer = 0 To r.FieldCount - 1
            col = New DataColumn
            col.ColumnName = r.GetName(i)
            If r.GetFieldType(i) Is GetType(MySqlDateTime) Then
                col.DataType = GetType(System.DateTime)
            Else
                col.DataType = r.GetFieldType(i)
            End If
            DataTable.Columns.Add(col)
        Next

        '-- Populate the DataTable row by row
        Dim row As DataRow
        While r.Read
            row = DataTable.NewRow
            For Each c As DataColumn In DataTable.Columns
                Dim colName As String = c.ColumnName
                Dim ordinal As Integer = r.GetOrdinal(colName)
                row.Item(c) = DBNull.Value '-- default to DBNull

                If Not r.IsDBNull(ordinal) Then
                    If r.GetFieldType(ordinal) Is GetType(MySqlDateTime) Then
                        '-- Only copy valid (non-zero) dates; zero dates stay DBNull
                        If r.GetMySqlDateTime(colName).IsValidDateTime Then
                            row.Item(c) = r.GetDateTime(colName)
                        End If
                    Else
                        row.Item(c) = r.Item(colName)
                    End If
                End If
            Next
            DataTable.Rows.Add(row)
        End While
    Finally
        '-- Always close the reader (the original empty Catch swallowed errors)
        If r IsNot Nothing Then
            r.Close()
            r = Nothing
        End If
    End Try
End Sub
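A minimal usage sketch (the connection string, table name, and variable names are placeholders for illustration):

Dim conn As New MySqlConnection("Server=myserver;Database=mydb;Uid=myuser;Pwd=mypassword;")
conn.Open()

Dim cmd As New MySqlCommand("SELECT * FROM saved_searches", conn)
Dim dt As DataTable = Nothing
FillDataTable(dt, cmd) '-- dt comes back with System.DateTime columns, safe to serialize

conn.Close()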

Saturday, 13 March 2010

MySQL INSERT & UPDATE date mask

Use 'yyyy-MM-dd HH:mm:ss' as the date mask (i.e. format the date using this format string) in insert and update statements.
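For example, in .Net (the table and column names are made up for illustration; parameterised queries are preferable where the provider supports them, this just shows the mask):

Dim sql As String = "UPDATE orders SET created_on = '" & _
    DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss") & "' WHERE id = 1"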

.Net Exception: Unable to convert MySQL date/time value to System.DateTime

When using DataAdapter.Fill()

I encountered the exception "Unable to convert MySQL date/time value to System.DateTime" while implementing MySQL support in our .Net data access component. It occurs when returning the results (into a DataTable) of a select statement that includes a date/time column holding a 0/0/0000 00:00:00 value, i.e. where you inserted zero.

You can avoid this exception either by 

1) adding the parameter 'Allow Zero Datetime=true;' to your connection string (worked for me)
2) setting the values to a non-zero date (kind of obvious)
3) setting the values to null rather than zero.

Solutions 2) and 3) are probably more suited to new databases, whereas 1) suits you if you are planning on migrating a bunch of data from elsewhere.
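For option 1), the connection string ends up looking something like this (server and credentials are placeholders):

Server=myserver;Database=mydb;Uid=myuser;Pwd=mypassword;Allow Zero Datetime=true;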

When implicitly converting / casting a DateTime DataTable column to string

Once I got past the above, I hit the exception "Conversion from type 'MySqlDateTime' to type 'String' is not valid" while trying to implicitly convert the zero datetime value (i.e. 0/0/0000 00:00:00) in the DataTable date column.

I solved this by calling .ToString explicitly.

Worth mentioning that this exact code runs against MS SQL Server and Oracle Database without encountering the above issues.

Thanks to Aleksandar's Blog
http://vucetica.blogspot.com/2009/01/unable-to-convert-mysql-datetime-value.html

and Tjitjing Blog
http://blog.tjitjing.com/index.php/2007/04/unable-to-convert-mysql-datetime-value.html

Wednesday, 3 February 2010

Tips on modelling your CRM data in Salesforce

  • Try and fit your data into the standard Salesforce objects and their relationships.
  • Do not assume that the data structure in your current system is the best or the only way. Keep an open mind.
  • Think about how your processes and workflow will fit - data views, data ownership, data security, work allocation etc.
  • It is well worth talking through various usage scenarios, over a WebEx/GoToMeeting session, with Salesforce staff.
  • When possible use existing fields, check carefully before creating new custom fields. Using an existing field will mean more out of the box functionality later.
  • Create ExternalID fields on all relevant objects (including the User object) if you are importing data into Salesforce from another CRM system.
  • Do plan to import your user login accounts.