{"id":3063,"date":"2023-04-21T07:57:30","date_gmt":"2023-04-21T07:57:30","guid":{"rendered":"https:\/\/pickl.ai\/blog\/?p=3063"},"modified":"2025-02-19T06:42:19","modified_gmt":"2025-02-19T06:42:19","slug":"l1-and-l2-regularization-in-machine-learning","status":"publish","type":"post","link":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/","title":{"rendered":"Learn L1 and L2 Regularisation in Machine Learning"},"content":{"rendered":"<p><span style=\"font-weight: 400;\"><strong>Summary:<\/strong>\u00a0<\/span><span style=\"font-weight: 400;\">L1 and L2 Regularisation in Machine Learning prevent overfitting by adding penalty terms to model parameters. L1 Regularisation selects important features by reducing some coefficients to zero, while L2 Regularisation smooths weight distributions. Choosing the right method ensures optimal model performance, balancing complexity, generalisation, and robustness in predictive analytics.<\/span><\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#Introduction\" >Introduction<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#What_is_Regularisation_in_Machine_Learning\" >What is Regularisation in Machine Learning?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#What_is_L1_Regularisation\" >What is L1 Regularisation?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#What_is_L2_Regularisation\" >What is L2 Regularisation?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" 
href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#Differences_Between_L1_and_L2_Regularisation\" >Differences Between L1 and L2 Regularisation<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#L1_Regularisation\" >L1 Regularisation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#L2_Regularisation\" >L2 Regularisation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#Tabular_Representation_of_Key_Differences_between_L1_and_L2\" >Tabular Representation of Key Differences between L1 and L2<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#Practical_Applications_of_L1_and_L2_Regularisation\" >Practical Applications of L1 and L2 Regularisation<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#When_to_Use_L1_Regularisation\" >When to Use L1 Regularisation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#When_to_Use_L2_Regularisation\" >When to Use L2 Regularisation<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#Conclusion\" >Conclusion<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#Frequently_Asked_Questions\" >Frequently Asked Questions<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-14\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#What_is_the_Difference_Between_L1_and_L2_Regularisation_in_Machine_Learning\" >What is the Difference Between L1 and L2 Regularisation in Machine Learning?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-15\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#When_Should_You_Use_L1_Regularisation_in_Machine_Learning\" >When Should You Use L1 Regularisation in Machine Learning?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-16\" href=\"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#How_Does_L2_Regularisation_Improve_Model_Performance\" >How Does L2 Regularisation Improve Model Performance?<\/a><\/li><\/ul><\/li><\/ul><\/nav><\/div>\n<h2 id=\"introduction\"><span class=\"ez-toc-section\" id=\"Introduction\"><\/span><b>Introduction<\/b><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Machine Learning enables computers to learn from <\/span><a 
href=\"https:\/\/pickl.ai\/blog\/difference-between-data-and-information\/\"><span style=\"font-weight: 400;\">data<\/span><\/a><span style=\"font-weight: 400;\"> without explicit programming, revolutionising industries with its ability to analyse patterns and make intelligent decisions.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">According to Fortune Business Insights, the global Machine Learning market, valued at $15.44 billion in 2021, is projected to reach $209.91 billion by 2029, growing at a remarkable <\/span><span style=\"font-weight: 400;\">CAGR of 38.8%<\/span><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, a key challenge in <\/span><a href=\"https:\/\/pickl.ai\/blog\/what-is-machine-learning\/\"><span style=\"font-weight: 400;\">Machine Learning<\/span><\/a><span style=\"font-weight: 400;\"> is overfitting. In overfitting, models memorise data instead of generalising well. Regularisation addresses this issue by adding a penalty to the loss function, improving model performance.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This blog explores L1 and L2 Regularisation. By reading this blog, you\u2019ll understand the differences between these two techniques and their role in optimising Machine Learning models.<\/span><\/p>\n<p><b>Key Takeaways<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">L1 Regularisation (Lasso) eliminates less important features by shrinking some coefficients to zero.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">L2 Regularisation (Ridge) ensures smaller, non-zero weights, improving model generalisation.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">L1 is ideal for feature selection, while L2 is useful when all features are relevant.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Both methods prevent overfitting, enhancing predictive accuracy.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Choosing the right regularisation depends on dataset complexity and learning objectives.<\/span><\/li>\n<\/ul>\n<h2 id=\"what-is-regularisation-in-machine-learning\"><span class=\"ez-toc-section\" id=\"What_is_Regularisation_in_Machine_Learning\"><\/span><b>What is Regularisation in Machine Learning?<\/b><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Regularisation<\/span><span style=\"font-weight: 400;\"> is the <\/span><a href=\"https:\/\/pickl.ai\/blog\/regularization-in-machine-learning\/\"><span style=\"font-weight: 400;\">approach in Machine Learning<\/span><\/a><span style=\"font-weight: 400;\"> that prevents overfitting by ensuring that a penalty term is included within the model\u2019s function. There are two main objectives of Regularisation include-<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">To reduce the complexity of a model.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">To improve the ability of the model to generalise new inputs.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Numerous Regularisation methods add different penalty terms, including L1 and L2 Regularisation. 
<p>Different Regularisation methods add different penalty terms; L1 and L2 Regularisation are the most common. L1 Regularisation penalises the absolute values of the model's parameters, while L2 Regularisation penalises their squares.</p>
<p>By constraining the model's parameters in this way, Regularisation reduces the chance of overfitting and helps the model perform better on unseen data.</p>
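<p>To make the penalty idea concrete, here is a minimal NumPy sketch (an illustration added for clarity, not code from any specific library) of how an L1 or L2 term can be added to an ordinary mean-squared-error loss. The function name and the lambda value are hypothetical:</p>
<pre><code>import numpy as np

def regularised_loss(y_true, y_pred, weights, lam, penalty="l2"):
    """Mean squared error plus an L1 or L2 penalty on the model weights."""
    mse = np.mean((y_true - y_pred) ** 2)
    if penalty == "l1":
        return mse + lam * np.sum(np.abs(weights))   # L1: sum of absolute values
    return mse + lam * np.sum(weights ** 2)          # L2: sum of squares

# Example: the same errors and weights, penalised two different ways.
y_true = np.array([3.0, -0.5, 2.0])
y_pred = np.array([2.5, 0.0, 2.1])
w = np.array([0.8, -1.2, 0.05])
print(regularised_loss(y_true, y_pred, w, lam=0.1, penalty="l1"))
print(regularised_loss(y_true, y_pred, w, lam=0.1, penalty="l2"))
</code></pre>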
<\/span><a href=\"https:\/\/pickl.ai\/blog\/hyperparameters-in-machine-learning\/\"><span style=\"font-weight: 400;\">Hyperparameter lambda<\/span><\/a><span style=\"font-weight: 400;\"> regulates the strength of L1 Regularisation by controlling the size of the penalty term.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Thus, improvement in Regularisation occurs when lambda rises, and the parameters are reduced to zero.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The L1 Regularisation formula is given below:\u00a0<\/span><\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-19916\" src=\"https:\/\/pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5.png\" alt=\" L1 Regularisation formula.\" width=\"544\" height=\"115\" srcset=\"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5.png 544w, https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5-300x63.png 300w, https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5-110x23.png 110w, https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5-200x42.png 200w, https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5-380x80.png 380w, https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5-255x54.png 255w, https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image5-150x32.png 150w\" sizes=\"(max-width: 544px) 100vw, 544px\" \/><\/p>\n<h2 id=\"what-is-l2-regularisation\"><span class=\"ez-toc-section\" id=\"What_is_L2_Regularisation\"><\/span><b>What is L2 Regularisation?<\/b><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400;\">L2 Regularisation, also known as <\/span><a href=\"https:\/\/pickl.ai\/blog\/understanding-ridge-regression-in-machine-learning\/\"><span style=\"font-weight: 400;\">Ridge<\/span><\/a><span style=\"font-weight: 400;\"> Regularisation, is an approach in Machine Learning. It avoids overfitting by executing penalty terms in the model\u2019s loss functions on the squares of the model&#8217;s parameters. 
<h2>What is L2 Regularisation?</h2>
<p>L2 Regularisation, also known as <a href="https://pickl.ai/blog/understanding-ridge-regression-in-machine-learning/">Ridge</a> Regularisation, avoids overfitting by adding a penalty term to the model's loss function based on the squares of the model's parameters. The primary goal of L2 Regularisation is to keep the model's parameters small and prevent them from growing excessively large.</p>
<p><img src="https://pickl.ai/blog/wp-content/uploads/2023/04/image2-13.jpg" alt="Ridge (L2) Regression" width="1600" height="1066" /></p>
<p>In L2 Regularisation, a term proportional to the squares of the model's parameters is added to the loss function. This limits the size of the parameters and prevents them from growing out of control. The hyperparameter lambda controls the intensity of the Regularisation by scaling the penalty term.
The larger the lambda, the stronger the Regularisation and the smaller the resulting parameters.</p>
<p>The L2 Regularisation formula is given below:</p>
<p><img src="https://pickl.ai/blog/wp-content/uploads/2023/04/image1.png" alt="L2 Regularisation formula" width="542" height="116" /></p>
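<p>For comparison with the Lasso sketch above, here is a similar scikit-learn illustration (again synthetic data and arbitrary alpha values) of Ridge shrinkage: coefficients get smaller as alpha grows, but they do not become exactly zero:</p>
<pre><code>import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# Larger alpha (lambda) means stronger shrinkage, yet all coefficients stay non-zero.
for alpha in (0.1, 10.0, 1000.0):
    ridge = Ridge(alpha=alpha).fit(X, y)
    print(f"alpha={alpha}: largest |coef| = {np.max(np.abs(ridge.coef_)):.2f}, "
          f"zero coefficients = {int(np.sum(ridge.coef_ == 0))}")
</code></pre>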
<h2>Differences Between L1 and L2 Regularisation</h2>
<p>While both L1 and L2 Regularisation aim to mitigate overfitting by adding a penalty on the model's parameters, they do so differently, leading to distinct effects on the model's performance and structure. Understanding these differences is crucial for selecting the appropriate Regularisation technique for your Machine Learning task.</p>
<h3>L1 Regularisation</h3>
<p>L1 Regularisation, also known as Lasso (Least Absolute Shrinkage and Selection Operator), adds the absolute values of the model parameters as a penalty term. This approach has several distinctive features and implications:</p>
<ul>
<li><b>Penalty Term</b>: Based on the absolute values of the model parameters.</li>
<li><b>Sparse Solutions</b>: Some parameters are reduced to zero, producing sparse solutions.</li>
<li><b>Sensitivity to Outliers</b>: More sensitive to outliers than L2 Regularisation.</li>
<li><b>Feature Selection</b>: Retains a subset of the most important features, effectively performing feature selection.</li>
<li><b>Non-smooth Optimisation</b>: The penalty is not differentiable at zero, which makes optimisation more challenging.</li>
<li><b>Correlated Features</b>: Tends to keep one feature from a group of highly correlated features and shrink the others to zero.</li>
<li><b>High-Dimensional Data</b>: Useful when dealing with high-dimensional data with many candidate features.</li>
<li><b>Also Known As</b>: Commonly referred to as Lasso Regularisation.</li>
</ul>
<h3>L2 Regularisation</h3>
<p>L2 Regularisation, also known as Ridge Regularisation, adds the squares of the model parameters as a penalty term.
This technique offers a different set of characteristics:</p>
<ul>
<li><b>Penalty Term</b>: Based on the squares of the model parameters.</li>
<li><b>Non-sparse Solutions</b>: Keeps all parameters non-zero, producing non-sparse solutions.</li>
<li><b>Robustness to Outliers</b>: Generally more robust to outliers than L1 Regularisation.</li>
<li><b>Feature Utilisation</b>: All features remain in the model and contribute to the final predictions.</li>
<li><b>Smooth, Convex Optimisation</b>: The squared penalty is smooth and convex, making it easier to optimise than the non-smooth L1 penalty.</li>
<li><b>Correlated Features</b>: Spreads the weight across correlated features rather than singling one out.</li>
<li><b>High-Dimensional Data</b>: Useful for high-dimensional data when the goal is a less complex, more stable model.</li>
<li><b>Also Known As</b>: Commonly referred to as Ridge Regularisation.</li>
</ul>
<h3>Tabular Representation of Key Differences between L1 and L2</h3>
<p>Understanding the key differences between L1 and L2 Regularisation helps in choosing the proper method for a specific Machine Learning problem.
The table below summarises these differences.</p>
<p><img src="https://pickl.ai/blog/wp-content/uploads/2023/04/image3-1.png" alt="Key differences between L1 and L2 Regularisation" width="957" height="803" /></p>
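<p>The central contrast in the table, sparse versus non-sparse solutions, can be checked directly. The following sketch (an added, self-contained comparison on synthetic data; the alpha values and data parameters are arbitrary choices) fits unregularised least squares, Lasso, and Ridge on the same noisy dataset and reports how many coefficients each sets exactly to zero:</p>
<pre><code>import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Noisy synthetic data where only a few of the 50 features are informative.
X, y = make_regression(n_samples=300, n_features=50, n_informative=5,
                       noise=20.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

models = {
    "OLS (no regularisation)": LinearRegression(),
    "Lasso (L1)": Lasso(alpha=1.0),
    "Ridge (L2)": Ridge(alpha=1.0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    zeros = int(np.sum(model.coef_ == 0))
    print(f"{name}: test R^2 = {model.score(X_test, y_test):.3f}, "
          f"zero coefficients = {zeros}")
</code></pre>
<p>Typically the Lasso fit zeroes out many of the uninformative features, while Ridge and plain least squares keep every coefficient non-zero.</p>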
<h2>Practical Applications of L1 and L2 Regularisation</h2>
<p>The choice between L1 and L2 Regularisation hinges on the specific characteristics and requirements of the Machine Learning problem. Each method has unique advantages and suits different scenarios. Here are some practical considerations to guide the selection:</p>
<h3>When to Use L1 Regularisation</h3>
<p>L1 Regularisation is ideal for scenarios where feature selection and sparsity are essential. Specific cases where L1 Regularisation excels include:</p>
<ul>
<li><b>Feature Selection</b>: L1 Regularisation is highly effective for identifying and retaining essential features. It reduces the number of features by setting some coefficients to zero, effectively performing feature selection.</li>
<li><b>High-Dimensional Data</b>: When dealing with datasets with many features, L1 Regularisation helps manage dimensionality. Producing sparse solutions simplifies the model and makes it easier to interpret.</li>
<li><b>Irrelevant or Redundant Features</b>: If you suspect that a subset of the features in your dataset is irrelevant or redundant, L1 Regularisation can help. It tends to shrink the coefficients of less important features to zero, removing them from the model and improving performance.</li>
</ul>
<h3>When to Use L2 Regularisation</h3>
<p>L2 Regularisation suits scenarios where keeping all features and robustness to outliers are crucial. Specific cases where L2 Regularisation is beneficial include:</p>
<ul>
<li><b>All Features Relevant</b>: When you believe all features in your dataset contribute to the outcome, L2 Regularisation is appropriate. Unlike L1, it does not shrink any coefficients to zero, so every feature remains in the model.</li>
<li><b>Robustness to Outliers</b>: L2 Regularisation is more robust to outliers than L1. If your dataset contains outliers that could significantly influence the model, L2 Regularisation can help mitigate their impact, leading to more stable and reliable predictions.</li>
<li><b>High-Dimensional Data and Less Complex Models</b>: When dealing with high-dimensional data where the goal is to reduce model complexity, L2 Regularisation is useful. It smooths the coefficients, resulting in a simpler and more generalisable model.</li>
</ul>
<p>After carefully considering your Machine Learning problem's specific needs, you can choose between L1 and L2 Regularisation to achieve optimal performance and model interpretability. In either case, the regularisation strength itself should be tuned, as sketched below.</p>
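<p>Whichever penalty you pick, the value of lambda is usually chosen by cross-validation rather than by hand. The sketch below (an added illustration using scikit-learn's built-in cross-validated estimators on synthetic data; the candidate alpha grid is arbitrary) shows one common way to do this:</p>
<pre><code>import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, RidgeCV

X, y = make_regression(n_samples=300, n_features=40, n_informative=8,
                       noise=15.0, random_state=7)

alphas = np.logspace(-3, 3, 25)   # candidate lambda (alpha) values

# Each estimator fits the model for every candidate alpha and keeps the one
# with the best cross-validated score.
lasso_cv = LassoCV(alphas=alphas, cv=5).fit(X, y)
ridge_cv = RidgeCV(alphas=alphas, cv=5).fit(X, y)

print("Best alpha for Lasso:", lasso_cv.alpha_)
print("Best alpha for Ridge:", ridge_cv.alpha_)
</code></pre>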
<h2>Conclusion</h2>
<p>L1 and L2 Regularisation in Machine Learning prevent overfitting and improve model performance. L1 Regularisation enables feature selection by shrinking some coefficients to zero, making it ideal for high-dimensional data.</p>
<p>L2 Regularisation, on the other hand, produces smoother weight distributions, making models more robust. The proper regularisation method depends on the specific dataset and learning objective.</p>
<p>While L1 is best for sparsity and feature selection, L2 helps when all features contribute to predictions. Understanding their differences allows data scientists to build more efficient, generalisable models that perform well on unseen data.</p>
<h2>Frequently Asked Questions</h2>
<h3>What is the Difference Between L1 and L2 Regularisation in Machine Learning?</h3>
<p>L1 Regularisation (Lasso) penalises absolute parameter values, leading to sparse solutions and feature selection. L2 Regularisation (Ridge) penalises squared parameter values, ensuring smaller, non-zero weights and reducing overfitting while keeping all features in the model.</p>
<h3>When Should You Use L1 Regularisation in Machine Learning?</h3>
<p>Use L1 Regularisation when feature selection is necessary, especially for high-dimensional datasets. It eliminates irrelevant features by setting some coefficients to zero, simplifying the model and improving interpretability. L1 is ideal when dealing with redundant or unnecessary variables.</p>
<h3>How Does L2 Regularisation Improve Model Performance?</h3>
<p>L2 Regularisation prevents overfitting by adding a squared penalty on large model parameters, producing smoother weight distributions.
It helps when all features contribute to predictions, and it improves the model's generalisation by reducing variance, leading to better performance on unseen data.</p>
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/\"},\"author\":{\"name\":\"Asmita Kar\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/#\\\/schema\\\/person\\\/deb3008b208be14f6776365a3e3bdbf9\"},\"headline\":\"Learn L1 and L2 Regularisation in Machine Learning\",\"datePublished\":\"2023-04-21T07:57:30+00:00\",\"dateModified\":\"2025-02-19T06:42:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/\"},\"wordCount\":1604,\"image\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/image1-12.jpg\",\"keywords\":[\"differences between L1 and L2 regularization\",\"L1 and L2 Regularization in Machine Learning\",\"l1 regularization formula\",\"l2 regularization formula\",\"Lasso Regression\",\"Ridge Regression\",\"What is L1 Regularization\",\"What is L2 regularization\"],\"articleSection\":[\"Machine Learning\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/\",\"url\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/\",\"name\":\"L1 and L2 Regularisation in Machine Learning\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/image1-12.jpg\",\"datePublished\":\"2023-04-21T07:57:30+00:00\",\"dateModified\":\"2025-02-19T06:42:19+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/#\\\/schema\\\/person\\\/deb3008b208be14f6776365a3e3bdbf9\"},\"description\":\"Learn L1 and L2 Regularisation in Machine Learning, their differences, use cases, and how they prevent overfitting to improve model 
performance.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/image1-12.jpg\",\"contentUrl\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/image1-12.jpg\",\"width\":1200,\"height\":628,\"caption\":\"Regularisation\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/l1-and-l2-regularization-in-machine-learning\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Machine Learning\",\"item\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/category\\\/machine-learning\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Learn L1 and L2 Regularisation in Machine Learning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/\",\"name\":\"Pickl.AI\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/#\\\/schema\\\/person\\\/deb3008b208be14f6776365a3e3bdbf9\",\"name\":\"Asmita Kar\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2022\\\/10\\\/avatar_user_9_1665051800-96x96.jpg5d1d3dbab09efb0bbc94498e4de47251\",\"url\":\"https:\\\/\\\/pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2022\\\/10\\\/avatar_user_9_1665051800-96x96.jpg\",\"contentUrl\":\"https:\\\/\\\/pickl.ai\\\/blog\\\/wp-content\\\/uploads\\\/2022\\\/10\\\/avatar_user_9_1665051800-96x96.jpg\",\"caption\":\"Asmita Kar\"},\"description\":\"I am a Senior Content Writer working with Pickl.AI. I am a passionate writer, an ardent learner and a dedicated individual. With around 3years of experience in writing, I have developed the knack of using words with a creative flow. Writing motivates me to conduct research and inspires me to intertwine words that are able to lure my audience in reading my work. My biggest motivation in life is my mother who constantly pushes me to do better in life. Apart from writing, Indian Mythology is my area of passion about which I am constantly on the path of learning more.\",\"url\":\"https:\\\/\\\/www.pickl.ai\\\/blog\\\/author\\\/asmitakar\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"L1 and L2 Regularisation in Machine Learning","description":"Learn L1 and L2 Regularisation in Machine Learning, their differences, use cases, and how they prevent overfitting to improve model performance.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/","og_locale":"en_US","og_type":"article","og_title":"Learn L1 and L2 Regularisation in Machine Learning","og_description":"Learn L1 and L2 Regularisation in Machine Learning, their differences, use cases, and how they prevent overfitting to improve model performance.","og_url":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/","og_site_name":"Pickl.AI","article_published_time":"2023-04-21T07:57:30+00:00","article_modified_time":"2025-02-19T06:42:19+00:00","og_image":[{"width":1200,"height":628,"url":"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image1-12.jpg","type":"image\/jpeg"}],"author":"Asmita Kar, Hardik Agrawal","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Asmita Kar","Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#article","isPartOf":{"@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/"},"author":{"name":"Asmita Kar","@id":"https:\/\/www.pickl.ai\/blog\/#\/schema\/person\/deb3008b208be14f6776365a3e3bdbf9"},"headline":"Learn L1 and L2 Regularisation in Machine Learning","datePublished":"2023-04-21T07:57:30+00:00","dateModified":"2025-02-19T06:42:19+00:00","mainEntityOfPage":{"@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/"},"wordCount":1604,"image":{"@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image1-12.jpg","keywords":["differences between L1 and L2 regularization","L1 and L2 Regularization in Machine Learning","l1 regularization formula","l2 regularization formula","Lasso Regression","Ridge Regression","What is L1 Regularization","What is L2 regularization"],"articleSection":["Machine Learning"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/","url":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/","name":"L1 and L2 Regularisation in Machine Learning","isPartOf":{"@id":"https:\/\/www.pickl.ai\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#primaryimage"},"image":{"@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image1-12.jpg","datePublished":"2023-04-21T07:57:30+00:00","dateModified":"2025-02-19T06:42:19+00:00","author":{"@id":"https:\/\/www.pickl.ai\/blog\/#\/schema\/person\/deb3008b208be14f6776365a3e3bdbf9"},"description":"Learn L1 and L2 Regularisation in Machine Learning, their differences, use cases, and how they prevent overfitting to improve model 
performance.","breadcrumb":{"@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#primaryimage","url":"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image1-12.jpg","contentUrl":"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image1-12.jpg","width":1200,"height":628,"caption":"Regularisation"},{"@type":"BreadcrumbList","@id":"https:\/\/www.pickl.ai\/blog\/l1-and-l2-regularization-in-machine-learning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.pickl.ai\/blog\/"},{"@type":"ListItem","position":2,"name":"Machine Learning","item":"https:\/\/www.pickl.ai\/blog\/category\/machine-learning\/"},{"@type":"ListItem","position":3,"name":"Learn L1 and L2 Regularisation in Machine Learning"}]},{"@type":"WebSite","@id":"https:\/\/www.pickl.ai\/blog\/#website","url":"https:\/\/www.pickl.ai\/blog\/","name":"Pickl.AI","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.pickl.ai\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.pickl.ai\/blog\/#\/schema\/person\/deb3008b208be14f6776365a3e3bdbf9","name":"Asmita Kar","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pickl.ai\/blog\/wp-content\/uploads\/2022\/10\/avatar_user_9_1665051800-96x96.jpg5d1d3dbab09efb0bbc94498e4de47251","url":"https:\/\/pickl.ai\/blog\/wp-content\/uploads\/2022\/10\/avatar_user_9_1665051800-96x96.jpg","contentUrl":"https:\/\/pickl.ai\/blog\/wp-content\/uploads\/2022\/10\/avatar_user_9_1665051800-96x96.jpg","caption":"Asmita Kar"},"description":"I am a Senior Content Writer working with Pickl.AI. I am a passionate writer, an ardent learner and a dedicated individual. With around 3years of experience in writing, I have developed the knack of using words with a creative flow. Writing motivates me to conduct research and inspires me to intertwine words that are able to lure my audience in reading my work. My biggest motivation in life is my mother who constantly pushes me to do better in life. Apart from writing, Indian Mythology is my area of passion about which I am constantly on the path of learning more.","url":"https:\/\/www.pickl.ai\/blog\/author\/asmitakar\/"}]}},"jetpack_featured_media_url":"https:\/\/www.pickl.ai\/blog\/wp-content\/uploads\/2023\/04\/image1-12.jpg","authors":[{"term_id":2170,"user_id":9,"is_guest":0,"slug":"asmitakar","display_name":"Asmita Kar","avatar_url":"https:\/\/pickl.ai\/blog\/wp-content\/uploads\/2022\/10\/avatar_user_9_1665051800-96x96.jpg","first_name":"Asmita","user_url":"","last_name":"Kar","description":"I am a Senior Content Writer working with Pickl.AI. I am a passionate writer, an ardent learner and a dedicated individual. With around 3years of experience in writing, I have developed the knack of using words with a creative flow. Writing motivates me to conduct research and inspires me to intertwine words that are able to lure my audience in reading my work. My biggest motivation in life is my mother who constantly pushes me to do better in life. 
Apart from writing, Indian Mythology is my area of passion about which I am constantly on the path of learning more."},{"term_id":2607,"user_id":45,"is_guest":0,"slug":"hardikagrawal","display_name":"Hardik Agrawal","avatar_url":"https:\/\/pickl.ai\/blog\/wp-content\/uploads\/2024\/07\/avatar_user_45_1721995960-96x96.jpeg","first_name":"Hardik","user_url":"","last_name":"Agrawal","description":"Hardik Agrawal has graduated with a B.Tech in Production and Industrial Engineering from IIT Delhi in 2024. His expertise lies in Data Science, Machine Learning, and SQL. He has hobbies like reading novels, venturing into new locations, and watching sci-fi movies."}],"_links":{"self":[{"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/posts\/3063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/comments?post=3063"}],"version-history":[{"count":6,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/posts\/3063\/revisions"}],"predecessor-version":[{"id":19922,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/posts\/3063\/revisions\/19922"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/media\/13164"}],"wp:attachment":[{"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/media?parent=3063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/categories?post=3063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/tags?post=3063"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.pickl.ai\/blog\/wp-json\/wp\/v2\/ppma_author?post=3063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}