
Lens Models & Decision Trees
Posted on May 4, 2016 @ 09:49:00 AM by Paul Meagher

Two topics that I like to blog about are lens models and decision trees. Today I want to offer up suggestions for how lens models might be constructed from decision trees.

Recall that a lens model looks something like this (taken from an earlier blog post):

Recall also that a fully specified decision tree looks something like this (taken from an earlier blog post):

Notice that the decision tree includes two factors: how much nitrogen to apply (100k, 160k, or 240k per acre) and the quality of the growing season (poor, average, good). In the context of a lens model, these might be viewed as indicators of what the yield will be at the end of the growing season. In other words, if the "intangible state" we are trying to judge is the amount of corn we will get at the end of a growing season, then two critical indicators are how much nitrogen is applied and what the quality of the growing season will be like (which in turn might be indicated by the amount of rain). We have control over one of those indicators (how much nitrogen to apply) but not the other (what the weather will be like). The main point I want to make here is that it is relatively easy to convert a decision tree to a lens model by making each factor in your decision tree an indicator in your lens model.
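To make the expected-value logic of such a tree concrete, here is a minimal sketch in Python. The season probabilities and yield figures are placeholders I've invented for illustration; they are not the values from the original tree.

```python
# Hypothetical expected-yield calculation for a nitrogen decision tree.
# Season probabilities and yields below are invented placeholders,
# not the values from the original blog's tree.

season_probs = {"poor": 0.25, "average": 0.50, "good": 0.25}

# Assumed yield (bushels/acre) for each nitrogen level x season quality.
yields = {
    "100k": {"poor": 90, "average": 120, "good": 140},
    "160k": {"poor": 95, "average": 135, "good": 165},
    "240k": {"poor": 95, "average": 140, "good": 185},
}

for nitrogen, outcomes in yields.items():
    expected = sum(season_probs[s] * y for s, y in outcomes.items())
    print(f"Nitrogen {nitrogen}/acre: expected yield {expected:.1f} bu/acre")
```

At the decision node, the nitrogen level with the highest expected yield (net of cost) would be the recommended choice.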

So, not only can we use multiple linear regression to specify the indicators in our lens model, but we can also use decision tree learning algorithms to specify them.

I don't want to get into the technical details of how decision tree algorithms work, but in general they work by recording various "features" associated with a target outcome you are interested in. For example, if you want to decide whether a c-section will be required to deliver a baby, you can look at all the c-section births and all the non-c-section births and record standardized information about those cases. Then you look for the single feature that best discriminates between c-section and non-c-section births. That feature will likely not be a perfect discriminator, so you take the remaining cases it does not sort cleanly and use the next best feature to discriminate between them, and so on. If you do this you come up with a decision tree that can be captured more simply as an if-then rule.
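As a rough sketch of that greedy, feature-by-feature process, the snippet below fits a small decision tree to a synthetic dataset with scikit-learn and prints it as if-then rules. The feature names and data are invented stand-ins for the kind of obstetric indicators used in the c-section example; only the learning procedure itself follows the discussion above.

```python
# Sketch: learn a small decision tree and print it as if-then rules.
# The dataset is synthetic; feature names are hypothetical stand-ins
# for the kind of indicators discussed in the c-section example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
fetal_distress    = rng.integers(0, 2, n)   # 0/1 indicator
previous_csection = rng.integers(0, 2, n)
breech_position   = rng.integers(0, 2, n)

# Invented ground-truth rule plus noise; the learned tree should
# recover something close to it.
y = ((fetal_distress | breech_position) & (rng.random(n) > 0.1)).astype(int)

X = np.column_stack([fetal_distress, previous_csection, breech_position])
features = ["fetal_distress", "previous_csection", "breech_position"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))
```

The printed output reads as a nested set of if-then tests, with the most discriminating feature at the root.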

We can construct a lens model from this tree, or from the if-then rule, where each of the three factors is an indicator in our lens model. If we use the thickness of the line connecting the judge to an indicator to represent the strength of the relationship, the first indicator would have a thicker line than the second, which would in turn be thicker than the third. The first indicator captures the most variance, followed by the second, followed by the third. This is how algorithms that generate decision trees work, so when we construct lens models based on them, we should expect the models to have this form.
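If we take the tree fit in the previous snippet, its feature importances give one way to quantify those line thicknesses. The width scaling below is an arbitrary drawing convention I've chosen, not part of the lens model itself.

```python
# Rank indicators by the variance they capture (feature importance)
# and map each to a relative line width for a lens-model diagram.
# Assumes `tree` and `features` from the previous snippet.
for name, importance in sorted(zip(features, tree.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    width = 1 + 4 * importance  # arbitrary scaling: thicker = stronger
    print(f"{name:18s} importance={importance:.2f}  line width={width:.1f}")
```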

The point of this blog is to show that there are several formal techniques we might use to generate a lens model. Multiple linear regression is one previously discussed technique. Today I discussed the use of decision tree algorithms as another. A decision tree algorithm also suggests a plausible psychological strategy for coming up with indicators; namely, pick an indicator that accounts for most of the target cases. If there are some cases it doesn't handle, pick another indicator that handles more of the remaining cases, and so on. You might not need many indicators before you arrive at a set that captures enough of the data to satisfy you.
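That strategy can be written down as a simple greedy loop: at each step, keep whichever unused indicator covers the most still-unhandled target cases, and stop once coverage feels good enough. A minimal sketch, assuming 0/1 indicator columns and a 0/1 target; the indicator names, data, and the 80% stopping threshold are all made up for illustration.

```python
# Greedy indicator selection: repeatedly keep the indicator that
# covers the most not-yet-handled target cases, then repeat on the rest.
def pick_indicators(indicators, target, enough=0.9):
    """indicators: {name: [0/1, ...]}; target: [0/1, ...]."""
    remaining = {i for i, t in enumerate(target) if t}
    total, chosen = len(remaining), []
    while remaining:
        scored = [(sum(1 for i in remaining if vals[i]), name)
                  for name, vals in indicators.items() if name not in chosen]
        if not scored:
            break
        best_count, best_name = max(scored)
        if best_count == 0:
            break  # no unused indicator covers any remaining case
        chosen.append(best_name)
        remaining = {i for i in remaining if not indicators[best_name][i]}
        if 1 - len(remaining) / total >= enough:
            break  # a "satisfying" level of coverage has been reached
    return chosen

# Tiny illustrative run with made-up indicator data.
indicators = {"rainfall":   [1, 1, 1, 0, 0],
              "soil_test":  [0, 0, 1, 1, 0],
              "pest_count": [0, 0, 0, 0, 1]}
target = [1, 1, 1, 1, 1]
print(pick_indicators(indicators, target, enough=0.8))
# -> ['rainfall', 'soil_test']
```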

Multiple linear regression and decision tree algorithms are two formal techniques you can use to make the indicators used in judgement explicit, and they offer concrete ways of thinking about how common sense, which we often find difficult to explain, might work and be improved upon. Doctors making decisions about c-sections might have relied upon common sense that included consideration of the factors studied, but the formal techniques helped to identify the relevant indicators and the overall strength of the relationship between the indicators and the need for a c-section. Where multiple regression is a more holistic/parallel method of finding indicators, decision tree learning algorithms strike me as a more analytic/sequential method of finding judgement indicators.

Below is a lecture by machine learning guru Tom Mitchell on decision tree learning that is set to start with him discussing the c-section example.
