<?xml version="1.0" encoding="UTF-8"?>
  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>markokello</title>
      <link>https://markokello.com/blog</link>
      <description></description>
      <language>en-us</language>
      <lastBuildDate>Mon, 20 Dec 2021 00:00:00 GMT</lastBuildDate>
      <atom:link href="https://markokello.com/feed.xml" rel="self" type="application/rss+xml"/>
      
  <item>
    <guid>https://markokello.com/blog/experimentation</guid>
    <title>Designing, Running and Analyzing A/B Testing Experiments</title>
    <link>https://markokello.com/blog/experimentation</link>
    
    <pubDate>Mon, 20 Dec 2021 00:00:00 GMT</pubDate>
    <category>Statistics</category><category>Product Analytics</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/exceptional-data-scientist</guid>
    <title>Insights on Becoming Exceptional and Making an Impact</title>
    <link>https://markokello.com/blog/exceptional-data-scientist</link>
    <description>This post shares insights and advice on how to become an exceptional data scientist. It offers new propositions building on a previous interview with Zindi Africa.</description>
    <pubDate>Mon, 06 Jul 2020 00:00:00 GMT</pubDate>
    <category>Learning</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/probability-distributions</guid>
    <title>A Sample of Probability Distributions and Their Properties</title>
    <link>https://markokello.com/blog/probability-distributions</link>
    
    <pubDate>Fri, 15 May 2020 00:00:00 GMT</pubDate>
    <category>ML</category><category>Statistics</category><category>Maths</category><category>Deep Learning</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/sql-series-1</guid>
    <title>SQL Classical Mistakes and How Not to Shoot Yourself in the Foot</title>
    <link>https://markokello.com/blog/sql-series-1</link>
    <description>SQL is easy to pick up, but it is also easy to shoot yourself in the foot with grouping and aggregations, the HAVING clause, NULL handling, and subqueries. This article highlights some of the most common mistakes and how to avoid them.</description>
    <pubDate>Mon, 17 Feb 2020 00:00:00 GMT</pubDate>
    <category>SQL</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/model-evaluation-online-metrics</guid>
    <title>Online Metrics for Model Evaluation in Production</title>
    <link>https://markokello.com/blog/model-evaluation-online-metrics</link>
    <description>Here we explore online metrics for evaluating models in production. From acquisition to revenue, we look into the AARRR framework to understand the key metrics that drive product success.</description>
    <pubDate>Sun, 26 Jan 2020 00:00:00 GMT</pubDate>
    <category>Product Analytics</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/probability-theory</guid>
    <title>Understanding and Quantifying Uncertainties Related to Random Events</title>
    <link>https://markokello.com/blog/probability-theory</link>
    <pubDate>Tue, 17 Dec 2019 00:00:00 GMT</pubDate>
    <category>Maths</category><category>Statistics</category><category>ML</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/gradient-descent</guid>
    <title>Gradient Descent Variants</title>
    <link>https://markokello.com/blog/gradient-descent</link>
    <description>Gradient descent is an iterative optimization algorithm for training machine learning models, whose primary purpose is to find the optimal parameters (weights and biases) for the model. The gradient of the loss function, a vector of partial derivatives, points in the direction of steepest increase. The parameters are therefore repeatedly updated by taking a step in the opposite direction, with the step size controlled by the learning rate. Through this process, the algorithm gradually drives the loss lower until it converges toward a local minimum.</description>
    <pubDate>Tue, 19 Nov 2019 00:00:00 GMT</pubDate>
    <category>ML</category><category>Deep Learning</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/derivatives</guid>
    <title>Derivatives, Partial Derivatives, Vector and Matrix Calculus</title>
    <link>https://markokello.com/blog/derivatives</link>
    <pubDate>Wed, 30 Oct 2019 00:00:00 GMT</pubDate>
    <category>ML</category><category>Maths</category><category>Deep Learning</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/information-theory</guid>
    <title>Entropy, Cross-entropy, KL divergence and Beyond</title>
    <link>https://markokello.com/blog/information-theory</link>
    <description>Entropy measures the level of uncertainty or randomness in a dataset. Information gain, in turn, evaluates how effectively a decision tree split reduces this entropy. It measures the reduction in uncertainty achieved by a particular split, helping to identify which features create the most meaningful divisions in the data and lead to better classification decisions.</description>
    <pubDate>Mon, 23 Sep 2019 00:00:00 GMT</pubDate>
    <category>ML</category><category>Deep Learning</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/Linear-algebra</guid>
    <title>Linear Algebra Concepts for Data Science and Machine Learning</title>
    <link>https://markokello.com/blog/Linear-algebra</link>
    <description>This blog covers the key linear algebra concepts used throughout data science and machine learning.</description>
    <pubDate>Wed, 28 Aug 2019 00:00:00 GMT</pubDate>
    <category>Maths</category><category>Statistics</category><category>Deep Learning</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/numerical-integrals</guid>
    <title>Integration Techniques and Numerical Integration for Machine Learning</title>
    <link>https://markokello.com/blog/numerical-integrals</link>
    <description>Computers use numerical methods to estimate integrals because real-world data is often discrete and general-purpose algorithms are required to handle arbitrary functions. This blog covers deterministic methods for 1D integration, such as the trapezoidal rule and Simpson's rule, as well as Monte Carlo methods for high-dimensional or complex domains.</description>
    <pubDate>Sat, 17 Aug 2019 00:00:00 GMT</pubDate>
    <category>Maths</category>
  </item>

  <item>
    <guid>https://markokello.com/blog/classical-ml</guid>
    <title>Theory Behind Some Classical Machine Learning Algorithms</title>
    <link>https://markokello.com/blog/classical-ml</link>
    <description>This blog is a follow-up to a presentation I gave at Outbox Hub. It is self-contained and explains the mathematics and theory behind key classical machine learning algorithms.</description>
    <pubDate>Tue, 06 Aug 2019 00:00:00 GMT</pubDate>
    <category>ML</category><category>Statistics</category>
  </item>

    </channel>
  </rss>
