{"id":422,"date":"2025-10-09T17:32:50","date_gmt":"2025-10-09T21:32:50","guid":{"rendered":"http:\/\/stephendavies.org\/nlp\/?p=422"},"modified":"2025-10-09T17:32:50","modified_gmt":"2025-10-09T21:32:50","slug":"todays-code-posted","status":"publish","type":"post","link":"http:\/\/stephendavies.org\/nlp\/index.php\/2025\/10\/09\/todays-code-posted\/","title":{"rendered":"Today&#8217;s code posted"},"content":{"rendered":"<p>I have pushed to <a href=\"https:\/\/github.com\/divilian\/data470\">the class github repo<\/a> our code from today (see the file <a href=\"https:\/\/github.com\/divilian\/data470\/blob\/main\/demo_autodiff.py\"><tt>demo_autodiff.py<\/tt><\/a>.)<\/p>\n<p>Btw, I may have completely forgotten to mention the name of the awesome algorithm used to systematically back-compute the partial derivatives of the loss function with respect to all the model inputs. It is called <b>autodiff<\/b>. In a humorous twist, the people at Meta who developed PyTorch apparently misheard the name and thought it was &#8220;autograd&#8221; (which makes sense, actually, since the gradient is precisely the vector containing all those partial derivatives) and so you will see references throughout the PyTorch docs to &#8220;autograd.&#8221; I prefer to use the original name.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I have pushed to the class github repo our code from today (see the file demo_autodiff.py.) Btw, I may have completely forgotten to mention the name of the awesome algorithm used to systematically back-compute the partial derivatives of the loss function with respect to all the model inputs. It is called autodiff. In a humorous [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[1],"tags":[],"class_list":["post-422","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/posts\/422","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/comments?post=422"}],"version-history":[{"count":1,"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/posts\/422\/revisions"}],"predecessor-version":[{"id":423,"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/posts\/422\/revisions\/423"}],"wp:attachment":[{"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/media?parent=422"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/categories?post=422"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/stephendavies.org\/nlp\/index.php\/wp-json\/wp\/v2\/tags?post=422"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}