Course reflection: Extending Microsoft Dynamics CRM 2011

Last week I spent three days on a CRM course about extending Microsoft Dynamics CRM 2011.

For those of you who wonder what Microsoft Dynamics CRM is, click here.

With a background in MS Dynamics CRM 2011, I had already done quite a bit of work customizing and configuring the system, so this course felt like the logical next step. In past projects I had also run into a few bugs and problems that the course happened to address. Our instructor, Darko Jovisic, was very competent, and his blog introduced us to many useful tools along with usage instructions.

Overall, the course covered the following topics:

  • Querying the MS Dynamics CRM
    • QueryExpression
    • LINQ
    • OData
  • Create Custom Workflow
  • Create Plugin
  • Create / Update Ribbon buttons
  • Modify Site Map
  • Web Resources
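One of the querying options above, OData, exposes CRM 2011 data over a REST endpoint (OrganizationData.svc). As a rough sketch of what such a query looks like, the snippet below builds an OData URL with Python's standard library; the organization URL is hypothetical, while AccountSet is the standard account entity set in CRM 2011:

```python
from urllib.parse import urlencode

def build_odata_query(org_url, entity_set, select=None, filter_=None, top=None):
    """Build an OData query URL for the CRM 2011 REST endpoint."""
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if filter_:
        params["$filter"] = filter_
    if top is not None:
        params["$top"] = str(top)
    base = f"{org_url}/XRMServices/2011/OrganizationData.svc/{entity_set}"
    return f"{base}?{urlencode(params)}" if params else base

# Hypothetical organization URL; active accounts have StateCode 0.
url = build_odata_query(
    "https://myorg.crm.example.com",
    "AccountSet",
    select=["Name", "AccountNumber"],
    filter_="StateCode/Value eq 0",
    top=10,
)
print(url)
```

The resulting URL can be issued from JavaScript in a web resource, which is exactly the combination the course drilled.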

To make the most of this course, participants should at least know MS Dynamics CRM 2011 (either by having used it as an end user or by having customized it before) and be able to program in C# or JavaScript.

This course opened a lot of doors for me and will allow me to design better solutions for matching customer needs when automating bank processes. I will come back to this later when I have something more concrete.

After taking this course, my conviction that MS CRM 2011 makes a good supplement to a BI marketing system was only reinforced. MS CRM 2011 is a great system for handling the business decisions that come out of data analysis (for example, executing a proactive customer-service process after a churn analysis).




Filed under Course Reflection, CRM

KDD – Creating a target dataset

This post is part of a series that describes the KDD (Knowledge Discovery in Databases) process. It outlines the process needed to gain meaningful, non-trivial knowledge from the ever-growing databases in businesses.


KDD stands for “Knowledge Discovery in Databases”; needless to say, the most important input to the process is the database itself, or the dataset.

A dataset can be as large as a combination of multiple data sources and multiple databases, or as small as a few columns and rows of a single database table. It all depends on the goal of the KDD effort: what do you want to find out?

Is a bigger dataset always better? Definitely not: the more data you include beyond what you need, the greater the likelihood of noise and inaccuracy in the analysis. What is the point of including data from the whole country if you only want to find out the shopping habits of the California population? Another problem is performance: more columns and more rows mean more time to analyze, and unnecessary data only burdens the system.

On the other hand, we want to include everything we need, and only what we need. That is why understanding the domain is so important.

Technically, there is no limit to how many sources we can include in the dataset; it is simply a job for the technical personnel. One can, for example, use Microsoft's SSIS to develop a job that merges the data sources into a central database where the analysis happens. Merging and cleaning may be required, which will be discussed in the next post.
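As a toy stand-in for such a merge job, the sketch below combines two in-memory "sources" into a target dataset using Python's built-in sqlite3; all table and column names are made up for illustration, and the goal (California customers and their spend) drives which columns and rows survive:

```python
import sqlite3

# Two hypothetical source extracts standing in for separate data sources.
crm = [("C1", "Alice", "CA"), ("C2", "Bob", "NY")]
sales = [("C1", 120.0), ("C1", 80.0), ("C2", 40.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT, name TEXT, state TEXT)")
conn.execute("CREATE TABLE orders (customer_id TEXT, amount REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", crm)
conn.executemany("INSERT INTO orders VALUES (?, ?)", sales)

# The target dataset keeps only the columns and rows the analysis goal
# requires: California customers and their total order amounts.
conn.execute("""
    CREATE TABLE target_dataset AS
    SELECT c.id, c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE c.state = 'CA'
    GROUP BY c.id, c.name
""")
rows = conn.execute("SELECT * FROM target_dataset").fetchall()
print(rows)  # [('C1', 'Alice', 200.0)]
```

An SSIS package does the same thing at scale: extract from each source, join and filter, and land the result in a central table for analysis.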


Some of the information in this blog post is based on the article: “From Data Mining to Knowledge Discovery in Databases” by Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth

Link to the original article


Filed under Theory

KDD – Understand the application domain

This post is part of a series that describes the KDD (Knowledge Discovery in Databases) process. It outlines the process needed to gain meaningful, non-trivial knowledge from the ever-growing databases in businesses.


As mentioned in the KDD overview post, the first step in knowledge discovery in databases is to understand the application domain. So what does that mean?

Simply put, it means you must know the business before diving into the business data. In other words, managers should think twice before hiring someone with plenty of BI-related certifications, having them dive into the data, and expecting them to return with lots of useful knowledge. It was never that easy.

Having done IT consulting work (formal and informal) for almost a decade, I was and still am surprised by how little attention clients pay to the domain knowledge an IT consultant possesses. A large development team can sometimes compensate for a lack of domain knowledge in some of its members, but time and time again elementary errors have surfaced because the developer lacked domain knowledge that the business analyst assumed was common knowledge.

In the business intelligence area, domain knowledge is extremely important. As the business intelligence developer holds the key to the database (or the data warehouse, as many people call it today), one cannot expect to gain meaningful results without being able to identify meaningful data. As the old saying goes: “How can I tell you something if you don’t know anything?” Imagine an IT worker building an annual report to illustrate the income and projected income for a corporation without knowing anything about accounts receivable and payable, or the difference between accrual and cash accounting. It would be a disaster.

However, in a smaller IT market like Norway, it is hard enough to find an experienced BI developer, let alone one with the matching domain knowledge. So what do businesses do?

In that case, a tight dialogue between someone who knows the industry well and the BI developer is essential, with close follow-up throughout the KDD process to ensure business knowledge is injected into the BI developer's analysis.

A short story from a past project:

When I was working as a business analyst for a company that does risk analysis on oil & gas pipelines, I was asked to suggest a good reporting tool and then create a few standard reports for them to resell to their clients. Despite multiple years of experience with the Microsoft BI stack, and an equal amount of experience in the financial world with an emphasis on marketing, my hands were tied as I went through the data. Corrosion, anode, section, depth were all new concepts to me. Even after many meetings I was still puzzled as I prepared the sample report. Luckily it was a report that was supposed to include all the existing data, with the end users narrowing it down to their liking; Tableau also offered a nice “Web Authoring” ability for end users to change the report ad hoc to accommodate their own needs. Had it been a detailed report for analysis purposes, I would certainly have failed the challenge.


Without a basic understanding of the domain, one can hardly make a meaningful analysis of the business data. The BI developer should either possess the necessary domain knowledge to implement the KDD process, or it should be a joint effort with a business analyst who serves as a guide to the BI developer.


Some of the information in this blog post is based on the article: “From Data Mining to Knowledge Discovery in Databases” by Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth

Link to the original article


Filed under Theory

Imagine – The future of BI – Consumer’s perspective

Please allow me to dream a little; the thought that the following could become reality within my lifetime just excites me.

Before we start, let’s list a few trends in the current world of BI:

- The size of data is growing faster and faster (along with the investments)

- Data is moving up to the cloud (driven by those investments)

- Data is accessed via mobile devices (phones and tablets are simply some of the ways to present the data)

- (Please leave a comment if I am leaving out anything important)

So we have a world with data everywhere, and money is pouring into the BI market to do something meaningful with the growing data in the cloud. What would the result look like?


1. On a cold winter morning in Oslo, I wake up and the bathroom floor has been pre-heated moments before my first use. This is possible because my bathroom-visit pattern was analyzed along with my calendar schedule. It saves energy without causing any discomfort.

2. As I brush my teeth, my mirror shows me news headlines that I can select by touch and save for later review. It also shows important information, such as that the snow will most likely delay the trains, and suggests an alternative transportation plan to work.

3. Suggestions for breakfast (lunch and dinner as well) are listed on the screen on the fridge door, balancing nutrition and personal taste. (I have read somewhere that it could even suggest a grocery list based on consumption rate.)

4. During the train or bus ride, my mobile device picks out tasks small enough to complete during the ride, increasing efficiency.

5. You arrive at work, and your workstation pulls up the document you were working on at home, with the cursor exactly where you left off.

6. You walk into a meeting room with clients, and your glasses brief you with basic information about the people across the table.

7. You bring your report to the meeting room and your colleague brings his analysis; with a finger drag the reports join together as one. Across the table sits the client, and by typing in a few business requirements the end result gets approved. The recording device produces the meeting minutes and sends them to everyone in the meeting as a reference.

8. In the evening, you turn on the TV and it shows a few TV programs on a split screen based on your preferences. Your wristwatch monitors your stress level to show programs that match your body's condition.

9. Before you sleep, your wristwatch takes a blood sample and offers some medical advice, booking an appointment with a doctor should it be necessary.

OK, that was my daydreaming for the day. In fact, most of these technologies exist today. With the cloud, data flows freely between devices. Displaying digital information on mirrors is not news either. Google Glass demonstrates the type of device we saw in the Dragon Ball anime 20 years ago. Smart TVs already possess computer-like functions: they connect to the internet, memorize your favorite channels, and record programs in the background while you are away. Samsung Gear forms the foundation of smartwatches that can do far more than show the time.

Knowing that, I can just sit back and relax because I know my imagination will become reality one day.


Filed under Business Application, Theory

KDD – Overview

It stands for: Knowledge Discovery in Databases

As you may have read in my previous post, the Wikipedia definition of data mining is the analysis step of the Knowledge Discovery in Databases process, shortened to KDD. So what is KDD?

KDD is a process that turns the data we have gathered into knowledge in plain language that both you and I can understand. It comprises nine stages.


The list below gives a brief overview of the steps; later posts will go into each step in further detail:

1. Understand the application domain: in short, one must have some basic knowledge of what one is analyzing. (The classic quote: if you don't know anything, how can I tell you something?)

2. Create a target dataset: the data foundation; without a dataset, there is nothing to gain knowledge from.

3. Data cleaning and preprocessing: remove “dirty” data, such as rows missing key attributes and attributes containing inappropriate values. This reduces noise in the analysis results.

4. Data reduction/projection: not all attributes in the dataset need to be considered for analysis; irrelevant attributes can be eliminated at this point.

5. Define the data analysis method based on the defined goal: is regression, clustering, or classification needed to analyze the data?

6. Model and hypothesis selection: based on the business requirements, choose the data mining model best able to generate the desired result, along with its parameters.

7. Data mining: Execute the defined model with the defined parameters.

8. Interpret results: examine the results and look for patterns; if no clear pattern emerges, consider redoing steps 1 to 7 to refine the business requirements, the data, and the data mining method.

9. Action: act on the discovered knowledge, either by applying the findings directly in the next decision meeting or by documenting them in a report for the interested parties.

With the steps described above, one should be able to gain meaningful, non-trivial knowledge from the ever-growing databases everyone is building.
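The middle of the pipeline can be sketched in a few lines of code. This is only a toy illustration with made-up data: cleaning drops rows with missing key attributes (step 3), projection keeps only the relevant attributes (step 4), and a trivial "model" summarizes the result (steps 6 and 7):

```python
# rows: (customer_id, state, monthly_spend); None marks a missing value.
raw = [
    ("C1", "CA", 200.0),
    ("C2", "CA", None),   # dirty row: missing a key attribute
    ("C3", "NY", 90.0),
    ("C4", "CA", 160.0),
]

def clean(rows):
    """Step 3: drop rows with missing key attributes."""
    return [r for r in rows if all(v is not None for v in r)]

def project(rows, keep):
    """Step 4: keep only the attributes relevant to the goal."""
    return [tuple(r[i] for i in keep) for r in rows]

def mine(rows):
    """Steps 6-7: a trivial 'model' - average spend per state."""
    groups = {}
    for state, spend in rows:
        groups.setdefault(state, []).append(spend)
    return {state: sum(v) / len(v) for state, v in groups.items()}

dataset = project(clean(raw), keep=[1, 2])   # state, monthly_spend
patterns = mine(dataset)
print(patterns)  # {'CA': 180.0, 'NY': 90.0}
```

Step 8 is then the human part: deciding whether the per-state averages are a meaningful pattern or whether the loop back to step 1 is needed.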


Information in this blog post is based on the article: “From Data Mining to Knowledge Discovery in Databases” by Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth

Link to the original article


Filed under Theory

What is data mining?

Wikipedia definition:

Data mining (the analysis step of the “Knowledge Discovery in Databases” process, or KDD),[1] an interdisciplinary subfield of computer science,[2][3][4] is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems.[2] The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use.[2] Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[2]


My definition: The process of obtaining non-trivial knowledge of a large data repository.

Data mining differs from data analysis. From a large database, one can easily query information by selecting certain fields and aggregating the data for sums, maximums, minimums, and so on. Performance depends on how the data is stored: we have the traditional relational database (where minimizing storage space is prioritized) and today's data warehouse storage (where query performance is prioritized, since disk space is cheap nowadays).
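That kind of trivial querying really is a one-liner per aggregate. A minimal sketch, with made-up sales rows:

```python
# A small table of (product, region, revenue) rows.
sales = [
    ("widget", "west", 120.0),
    ("widget", "east", 80.0),
    ("gadget", "west", 200.0),
]

# Plain data analysis: select by a field value, then aggregate.
west = [revenue for product, region, revenue in sales if region == "west"]
print(sum(west), max(west), min(west))  # 320.0 200.0 120.0
```

No hypotheses, no iteration; you know exactly what you are fetching, which is precisely what makes it trivial.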

The words mining and non-trivial go well together. Querying trivial knowledge from a database is almost like fetching an object from your pocket: you know it is there, and you know exactly how to get it. When you mine for something, however, you sweat, and you are unsure whether what you get is really what you hoped for. There is an analysis of the site; you take surveys of the rock composition and run analyses; from there you make further hypotheses about the rock composition of the surrounding area, then take a few more surveys and improve on the hypothesis repeatedly until you are fairly sure of it. Then you use the hypothesis to lead you to the place where you set up the mine and mine away.

How sure is fairly sure? It varies, and everyone can set their own standard. In fact, the hypothesis can always be improved as the dataset grows, by including more variables in the analysis, or simply through more iterations of refinement against new surveys.

Some might say that data mining is identifying the relationships between different variables in the data (regression), but I say that is incomplete. So what if you know that everyone in this city either drives or walks to work? Is it because there is no public transportation? Is it because driving is the cheaper alternative? What about biking? Without answering these questions, knowing the relationship between variables simply doesn't offer much value.


This blog will focus mainly on real-world data mining problems and will offer suggestions for how to solve them with some of today's technology.


Filed under General, Theory

Useful SQL Queries to Retrieve Daterange Dates

Most of us who have worked with SQL know of GETDATE(); it returns the timestamp of the very moment it is executed. Combine that with the DATEADD function and date calculations become much more powerful.

DATEADD takes a date, a date part (like day, month, or year), and a number, and does the arithmetic to produce another desired date.

Example: DATEADD(dd, -1, GETDATE()) returns yesterday; DATEADD(yy, 1, GETDATE()) returns the same day next year.

Last week I needed to build a report that returns results for last week, last month, and year to date. To make things complicated, “last week” doesn't mean 7 days ago, but Monday of last week through Sunday of last week. The same goes for last month (September 1st to September 30th if the report is run today). To fetch the fromDate and the toDate for those scenarios, we need to combine DATEADD with DATEDIFF:

DATEDIFF returns the integer difference between the two specified dates for the specified date part.

Example: DATEDIFF(dd, GETDATE(), DATEADD(dd, 1, GETDATE())) returns 1.

Now, with both DATEDIFF and DATEADD in action:

DATEADD(wk, DATEDIFF(wk, 6, GETDATE()) - 1, 7) returns the first date of last week at 0 hours, 0 minutes, 0.000 seconds. (The integers 6 and 7 are implicitly converted to the base dates 1900-01-07, a Sunday, and 1900-01-08, a Monday.)

DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 1, 0) returns the first date of last month at 0 hours, 0 minutes, 0.000 seconds

DATEADD(year, DATEDIFF(year, 0, GETDATE()) - 1, 0) returns the first date of last year at 0 hours, 0 minutes, 0.000 seconds

From there we can be lazy about calculating the last date of last week:

DATEADD(wk, DATEDIFF(wk, 6, GETDATE()), 7) actually returns the first date of this week at 0 hours, 0 minutes, 0.000 seconds; if we run the query with '<' instead of '<=' as the upper bound, this works just fine.

To complete the queries:

DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0) returns the first date of this month at 0 hours, 0 minutes, 0.000 seconds

DATEADD(year, DATEDIFF(year, 0, GETDATE()), 0) returns the first date of this year at 0 hours, 0 minutes, 0.000 seconds
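For readers who want to sanity-check these boundaries outside SQL Server, the same "first of last week / last month" logic can be sketched in Python (assuming Monday-to-Sunday weeks, as the post does; the example date is arbitrary):

```python
from datetime import date, timedelta

def week_start(d):
    """Monday of the week containing d."""
    return d - timedelta(days=d.weekday())

def month_start(d):
    return d.replace(day=1)

def last_week_range(d):
    """[Monday of last week, Monday of this week) - use with >= and <."""
    this_monday = week_start(d)
    return this_monday - timedelta(weeks=1), this_monday

def last_month_range(d):
    """[First of last month, first of this month)."""
    first_of_this = month_start(d)
    first_of_last = (first_of_this - timedelta(days=1)).replace(day=1)
    return first_of_last, first_of_this

today = date(2013, 10, 15)  # a Tuesday, for illustration
print(last_week_range(today))   # (date(2013, 10, 7), date(2013, 10, 14))
print(last_month_range(today))  # (date(2013, 9, 1), date(2013, 10, 1))
```

Note the half-open ranges: as with the T-SQL version, filtering with >= fromDate AND < toDate avoids having to compute "23:59:59.997"-style end-of-day timestamps.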


Filed under Technical, TSQL