Ep5. Numerical optimization in machine learning (I): the basics

convex

In logistic regression, there is no closed-form solution for the optimal model parameters, so we have to compute them with an iterative numerical optimization approach such as Newton's method or gradient descent. Numerical optimization is a crucial mathematical tool in machine learning and function fitting, and it is deeply embedded in model training, regularization, support vector machines, neural networks, and more. In the next few posts, I will summarize key concepts and approaches in numerical optimization and their applications in machine learning.
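To make the iterative idea concrete, here is a minimal sketch of batch gradient descent for logistic regression, assuming NumPy and a synthetic toy dataset; the function names and hyperparameters are illustrative, not taken from the post itself.

```python
# A minimal sketch of gradient descent for logistic regression.
# Names, learning rate, and iteration count are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(X, y, lr=0.1, n_iters=1000):
    """Fit logistic regression weights by batch gradient descent.

    X: (n_samples, n_features) design matrix; y: (n_samples,) labels in {0, 1}.
    There is no closed-form solution, so we iterate on the gradient of the
    average negative log-likelihood.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = sigmoid(X @ w)             # predicted probabilities
        grad = X.T @ (p - y) / len(y)  # gradient of the loss w.r.t. w
        w -= lr * grad                 # step along the negative gradient
    return w

# Toy usage on synthetic data, with a bias column appended to X
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = fit_logistic_gd(np.hstack([X, np.ones((100, 1))]), y)
```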

DIY curriculum for post-school life

post-school

Last February, I defended my PhD thesis and graduated from more than two decades of school life. It has now been a full year of post-school life. There are no more exams or curricula to quantify my progress into a GPA. I have come to realize that I must be the one setting my own goals, designing my own curriculum, and evaluating my progress introspectively. In this post, I share some lessons I have found useful in building a DIY curriculum.

Do you write production code as a data scientist?

production

Over the past month, I posed this question to friends, peers, and online tech forums, and received responses from more than 30 data scientists across industries, academic backgrounds, and career paths. The responses show a wide spectrum of involvement in production, and reveal some shared concerns about career development among data scientists.

Ep3. Goal setting for function fitting (regression)

Goal

Whenever we see the word “optimization”, the first question to ask is “what is to be optimized?” Defining an optimization goal that is both meaningful and tractable is the starting point of function fitting. In this post, I discuss goal setting for function fitting in regression.
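As one concrete illustration (my example here, not necessarily the one developed in the post), ordinary least squares sets the goal to minimizing the sum of squared residuals over a hypothesis class $\mathcal{H}$:

$$\hat{f} = \arg\min_{f \in \mathcal{H}} \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2$$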

Ep2. Restrict function search space by assumptions

Gaussian process

In the previous post, I discussed the function-fitting view of supervised learning. It is theoretically impossible to find the best-fitting function by searching an unrestricted, infinite function space. In this post, I will discuss how assumptions let us restrict the search space in function fitting.
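For instance (an illustrative assumption on my part, not necessarily the post's example), assuming the target function is linear collapses the infinite function space to a family parameterized by a weight vector and a bias:

$$\mathcal{H} = \left\{ f_{w,b}(x) = w^{\top} x + b \;:\; w \in \mathbb{R}^d,\ b \in \mathbb{R} \right\}$$

Fitting then reduces to searching over $(w, b)$ rather than over all possible functions.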