AI Businesses Programming Software Technology

Deep Learning Is Eating Software (petewarden.com) 147

Pete Warden, engineer and CTO of Jetpac, shares his view on how deep learning is already starting to change some of the programming is done. From a blog post, shared by a reader last week: The pattern is that there's an existing software project doing data processing using explicit programming logic, and the team charged with maintaining it find they can replace it with a deep-learning-based solution. I can only point to examples within Alphabet that we've made public, like upgrading search ranking, data center energy usage, language translation, and solving Go, but these aren't rare exceptions internally. What I see is that almost any data processing system with non-trivial logic can be improved significantly by applying modern machine learning. This might sound less than dramatic when put in those terms, but it's a radical change in how we build software. Instead of writing and maintaining intricate, layered tangles of logic, the developer has to become a teacher, a curator of training data and an analyst of results. This is very, very different than the programming I was taught in school, but what gets me most excited is that it should be far more accessible than traditional coding, once the tooling catches up. The essence of the process is providing a lot of examples of inputs, and what you expect for the outputs. This doesn't require the same technical skills as traditional programming, but it does need a deep knowledge of the problem domain. That means motivated users of the software will be able to play much more of a direct role in building it than has ever been possible. In essence, the users are writing their own user stories and feeding them into the machinery to build what they want.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Nom nom nom

    • by Anonymous Coward

      "how deep learning is already starting to change some of the programming is done."

      Perhaps how some of the English is done too?

      • Perhaps the AI optimised the sentence by removing the repeated "how"
        Maybe it's learnt that subtly changing the sentence in the summary leads to more clicks, as readers who notice it click on the link to see if the error is just in the summary, or the article as well.

        Or it was just written by someone with English as a second language
        Or someone who isn't very bright.
        Or it was a simple mistake.

        • One can speculate endlessly, but I notice more and more "English" on the Web that wouldn't be accepted in a primary school. Some comes from people whose first language is other than English; but many of those speak extremely correct English. A lot comes from people who were brought up and educated in Britain, the USA or other English-speaking countries.

          If I had to account for it, I could only wonder if it has been hurriedly transcribed by someone whose English is quite poor, from a sound recording of variab

        • I think the AI ate your dingo.
        • Honestly, I'm just satisfied that we can call these types of algorithms something reasonable (deep learning) instead of the incredibly misleading and far-too-broad term "AI".

  • by rsilvergun ( 571051 ) on Monday November 20, 2017 @02:14PM (#55588519)
    Deep learning's eating software and software's eating the world. We just need a few waves of Chinese needle snakes to eat Deep learning. Then gorillas to eat the snakes. Finally when wintertime rolls around, the gorillas simply freeze to death.
  • by Matt.Battey ( 1741550 ) on Monday November 20, 2017 @02:14PM (#55588523)

    Ya, I'm calling BS. Give us some concrete examples of how ML/AI/DL is doing anything other than burning CPU cycles on public clouds that drive up revenue for the cloud vendor.

    • Google translate: https://www.nature.com/news/de... [nature.com]

      • Google translate: https://www.nature.com/news/de... [nature.com]

        I think that he meant an everyday example, like building better accounting software, not something that would obviously benefit from deep learning, like language tools.

        Language tools are an obvious use for deep learning. Especially when users/contributors can tweak the context of words and idioms that do not translate well directly and may require some cultural knowledge for proper use in sentences.

        Something like accounting software would be hard to visualize using deep learning since the outc

        • I think that he meant an everyday example, like building better accounting software, not something that would obviously benefit from deep learning, like language tools.

          That would be silly, because the article only says that problems which benefit from deep learning are programmed in a different way, not that all programs are.

        • by Anonymous Coward

          I think that he meant

          You're free to think what you wish, but Matt.Battey didn't offer your qualifications. When you find yourself having to invent a bunch of qualifications to support your argument you may have a faulty argument.

          Financial institutions have been using ML to detect fraud for years. Every large credit card transaction you make is scrutinized by AI systems to detect fraud. Insurance claims and tax returns are also being analyzed by ML systems. This is an argument from ignorance.

            • OK, so ya, I didn't give many qualifications. But implications that ML is improving coding (or at least doing it "in a different way") need some qualifications and examples. I'll even accept that ML can optimize algorithms and "improve page rankings." BUT, it needs some pretty good examples and boundaries. Warden is trying to push us away from thinking his post is a "deep learning hype" piece, but that's exactly what it is.

            I mean he says "What I’m seeing is that the problem is increasingly solved by repl

            • Re:Ya right... (Score:4, Insightful)

              by Dog-Cow ( 21281 ) on Monday November 20, 2017 @03:34PM (#55589217)

              Essentially, ML can replace (parts of) systems that rely on heuristics. Anything with fixed rules, no matter how complicated the rule set, will not benefit. Why train a ML system when you can get 100% deterministic answers?

              • I've been thinking about this a good bit lately. Just last week, I was playing with a UCI data set for poker hand recognition. I used a neural net for the recognizer and tested it against the training and the test data set -- 99% accuracy!

                Seems pretty impressive, right? But, a simple, rule-based system would be 100% correct, always.

                I have a bad feeling that we're going to start seeing a lot of 99% solutions for 100% problems.
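
        To make that concrete: a deterministic rule-based classifier for the same task is only a screenful of code and is correct by construction. The Python sketch below is my own illustration, assuming the UCI-style encoding of a hand as five (suit, rank) pairs with suits 1-4 and ranks 1-13 (ace = 1); it is written for clarity, not speed.

        from collections import Counter

        def classify_hand(cards):
            # cards: five (suit, rank) pairs. Returns the UCI-style class
            # 0..9 (0 = nothing, 1 = pair, ... 8 = straight flush, 9 = royal).
            suits = [s for s, _ in cards]
            ranks = sorted(r for _, r in cards)
            counts = sorted(Counter(ranks).values(), reverse=True)
            flush = len(set(suits)) == 1
            # five consecutive ranks; the ace (1) may also play high
            straight = (ranks == list(range(ranks[0], ranks[0] + 5))
                        or ranks == [1, 10, 11, 12, 13])
            if straight and flush:
                return 9 if ranks == [1, 10, 11, 12, 13] else 8
            if counts == [4, 1]:
                return 7
            if counts == [3, 2]:
                return 6
            if flush:
                return 5
            if straight:
                return 4
            if counts == [3, 1, 1]:
                return 3
            if counts == [2, 2, 1]:
                return 2
            if counts == [2, 1, 1, 1]:
                return 1
            return 0

        # A pair of kings plus junk is class 1, every single time.
        print(classify_hand([(1, 13), (2, 13), (3, 2), (4, 7), (1, 9)]))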

          • I think that he meant

            You're free to think what you wish, but Matt.Battey didn't offer your qualifications. When you find yourself having to invent a bunch of qualifications to support your argument you may have a faulty argument.

            Financial institutions have been using ML to detect fraud for years. Every large credit card transaction you make is scrutinized by AI systems to detect fraud. Insurance claims and tax returns are also being analyzed by ML systems. This is an argument from ignorance.

            Matt.Battey's reply was to the topic and the article. The point of the article was that ML and AI are being used to develop a large amount of software that most people wouldn't recognize as being a use case for ML or AI. It doesn't take a genius to reach a logical conclusion that while he may not have stated this qualification up front, it's implied.

            Yes, banks, etc. are probably using DL to detect fraud (another obvious use case). But that's not what I meant by Financial software. I mean things like T

        • Comment removed based on user account deletion
    • It's doing a lot of things, but it isn't replacing traditional software, it is going in different directions, mostly image, text and voice processing, driving cars, driving data-center cooling, medical (reading scans, diagnosis), financial, spam, sentiment, content/product recommendation and web search ranking.
      • It's doing a lot of things, but it isn't replacing traditional software, it is going in different directions, mostly image, text and voice processing, driving cars, driving data-center cooling,

        I find that datacentre cooling one a bit odd. There's traditional CFD software that deals with such things. Maybe I didn't read the right sources, but I never saw a comparison to the engineer+CFD software school of design.

  • Alan Bradley: Some programs will be thinking soon.
    Dr. Walter Gibbs: Won't that be grand? Computers and the programs will start thinking and the people will stop.
  • Nice Advertisement (Score:5, Interesting)

    by prefec2 ( 875483 ) on Monday November 20, 2017 @02:20PM (#55588587)

    Most software today and in the coming decades is designed and developed to support business processes or data flow and execution in scientific processes. These systems need deterministic and foreseeable behavior. Yes, you may use "learning" classification mechanisms such as neural networks to support some tasks, but this is not changing how we develop software. In particular, developing software is usually a technical and social process: you have to understand the demands and needs of users, which requires interviews and discussions with them, and you have to work with users and UI designers to develop useful, easy-to-understand interfaces. And yes, you have to map all of this onto technology.

    • Not to mention the spate of articles showing how to destroy deep learning results by changing a few well-chosen pixels in images. You gotta have a heavily rule-based system in many cases, or in pretty much any case where "five 9s" reliability is involved.
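
      For readers who haven't seen these pixel attacks, the mechanism is easy to sketch. The toy below uses a plain logistic-regression "classifier" over 64 made-up pixels instead of a deep network (so the input gradient has a closed form); it only illustrates the gradient-sign idea, not an attack on any real trained model.

      import numpy as np

      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
      rng = np.random.default_rng(0)

      w = rng.normal(size=64)                  # stand-in for trained weights
      x = rng.uniform(0.0, 1.0, size=64)       # the clean "image"
      b = 3.0 - w @ x                          # contrived so the clean logit is +3
      p_clean = sigmoid(w @ x + b)             # about 0.95 confidence for class 1

      # Gradient-sign perturbation: nudge every pixel a small step in the
      # direction that increases the loss for the true label (1). For logistic
      # regression, d(loss)/dx = (p - y) * w.
      eps = 0.15
      x_adv = np.clip(x + eps * np.sign((p_clean - 1.0) * w), 0.0, 1.0)
      p_adv = sigmoid(w @ x_adv + b)

      print(f"confidence before: {p_clean:.3f}  after: {p_adv:.3f}")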

      • by prefec2 ( 875483 )

        Absolutely. If you read their article, they downplay the claim that it "is eating software" to: hey, it has applications in data processing. Yes! What else is new? Most of these things which sail under "deep learning" now have been available for decades, and they have been used for decades for all sorts of things. All the deep learning stuff is there to classify data, transforming it into information. And they still have the same issues as before, but with some more software and more processing power, we can handl

    • The article isn't saying that traditional software development is changing. What is happening is that some problems are suitable for a solution based on deep learning, and for those problems, the traditional programmer is replaced by someone specializing in configuring the neural net and training it. Pretty obvious, of course.

      • Of course if it looks like it'll save money then it'll get applied to problems which aren't suitable. There will be tears; whether they're of sadness or laughter, we know not yet.

      • by prefec2 ( 875483 )

        The aim of the article is to argue that "Deep Learning Is Eating Software", meaning it changes everything. Besides the rather aggressive term "is eating" (meaning it eradicates other approaches), this statement suggests a disruptive process is taking place. In the article itself they downplay it: in data processing software there is a shift away from specific logic to deep learning. While that is true in some sense, it does not have anything to do with the aim displayed in the title. In addition, they claim that deep l

    • by DontBeAMoran ( 4843879 ) on Monday November 20, 2017 @03:23PM (#55589079)

      I think it's a reflection that people always relate to what they know.

      For people who work in deep learning software, almost everything is just petabytes of data to be analyzed and classified.

      For people who work with microcontrollers, almost all of today's software is pure bloat that wastes CPU cycles and RAM.

      For people who work in security, almost all programmers are idiots.

      For people who work in design, almost everything is ugly.

      For everything else, there's MasterCard.

    • Comment removed based on user account deletion
    • by nickol ( 208154 )

      If you ever worked with marketing specialists, you know that there is nothing like "deterministic and foreseeable behavior" in marketing. In fact there are few deterministic and predictable processes in business, and most likely traditional software will remain there. However, let's take a look at a typical chain of business processes of an internet store:

      advertising - far from predictable
      SEO - not deterministic
      affecting buying habits - unpredictable and informal
      placing order - yes
      processing order - yes
      ship

  • This is marketing dribble from some guy at some company, neither of which are relevant. This is the kind of blogspamvertisement I'd expect in my inbox after a sales/marketing-oriented PM got a bug up their ass to research something outside their realm of expertise, not on /.
  • I'd be willing to speculate that the underlying sad story is one where magical black boxes, in all their imperfection, can still do better than e.g. run of the mill application software written by the majority of programmers who got their degrees in the last decade or so. Not in things like bookkeeping of course, what with tax codes and all, couldn't learn that by example if one tried -- but for genuinely nebulous things like individual preferences in conference room scheduling, or other frankly shithead jo

  • by Junta ( 36770 ) on Monday November 20, 2017 @02:34PM (#55588693)

    ML is generally enabling scenarios that were just too tedious to actually do by developer hands. Sure, there are specific scenarios where developers had done the best they could (and generally failed) with hopelessly unstructured data, but for the most part those problems were just left untouched as infeasible to do manually.

    For the vast majority of software development, ML doesn't add anything. If you have no unstructured data or a way to impose structure, ML doesn't do anything over boring old programming. Even when you find yourself in one of the very chaotic, large, and diverse data sets where ML can in theory help you sort through, you have to first chew through enough data in training to get decent confidence. So you not only need a large data set, you also need to have a continued need after human assisted training has already done the work on a big chunk of that data. Even then you may be grasping for some intelligent way to apply ML techniques, because the kicker is you have to have some sort of real idea of what to do, even if you have a 'how to do it'.

    Big Data has done this same song and dance. ML is now the purported answer to the problem that, once the data is collected and the tools to analyze it exist, most orgs still have no idea what to do with it. I suggest that the orgs will still have no idea what to do with the data, and ML won't move the needle much in the wider market, because the root cause is just a general lack of ideas about what to do with the data. This is the curse of hyped adoption: the vast majority of adopters will be disappointed because it doesn't magically solve their problems.

  • These systems will not gain the insight and expertise needed for many areas that require real-time responses. They may produce wonderful results in predicting stock market prices from historical data, but they will take far too long to be useful for the microsecond resolution trading that is done today. That takes teams of people designing new hardware and programming it, plus real-world factors such as proximity to the stock market's computers. As tech advances, the already trained, overly complex systems

    • but they will take far too long to be useful for the microsecond resolution trading that is done today

      A guy like Warren Buffett may think a year before doing a trade, and he's been pretty good at it. Not everything has to be done at microsecond level.

  • linear regression (Score:4, Interesting)

    by dmitrygr ( 736758 ) <dmitrygr@gmail.com> on Monday November 20, 2017 @02:36PM (#55588701) Homepage
    And by "deep learning" in most cases they mean "linear regression on cleaned-up data"
    • Except that the heart of deep learning is nonlinear operations. If they were linear, you wouldn't have to make them deep.
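
      That point is easy to check numerically: a stack of purely linear layers collapses to a single matrix, and only a nonlinearity in between prevents the collapse. A small numpy illustration (mine, not from the article):

      import numpy as np

      rng = np.random.default_rng(42)
      x = rng.normal(size=8)                    # an arbitrary input vector
      W1 = rng.normal(size=(16, 8))             # three purely linear "layers"
      W2 = rng.normal(size=(16, 16))
      W3 = rng.normal(size=(4, 16))

      deep_linear = W3 @ (W2 @ (W1 @ x))        # a 3-layer linear "net"...
      collapsed = (W3 @ W2 @ W1) @ x            # ...is just one linear map
      print(np.allclose(deep_linear, collapsed))      # True: depth bought nothing

      relu = lambda z: np.maximum(z, 0.0)
      deep_nonlinear = W3 @ relu(W2 @ relu(W1 @ x))
      print(np.allclose(deep_nonlinear, collapsed))   # False: now depth matters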

    • Re:linear regression (Score:4, Interesting)

      by serviscope_minor ( 664417 ) on Monday November 20, 2017 @04:50PM (#55590003) Journal

      No, not really, and I wouldn't call this insightful.

      Deep learning is not especially well defined, but it's not linear regression. I've seen several competing/complementary definitions.

      1. A neural net (much) more than 3 layers deep. A sufficiently wide 3-layer net has enough capacity to represent any function, so a lot of ANN learning focussed on these in the past. It turns out that having lots more layers makes training much more tractable (particularly with stochastic gradient descent and batchnorm).

      2. Convolutional nets (where you're basically learning convolutional kernels, which saves a huge number of parameters compared to a normal net especially if you have many layers) with many layers.

      3. Something which learns the low-level features in the same optimization as everything else. Traditional ML algorithms were often structured as a feature extraction stage which takes the data and extracts some features in a human-designed manner. You then apply ML, but if the ML can't optimize the right loss, you go for the closest loss you can, then get the thing you want with post-processing.

      A nice example would be the Viola-Jones face detector [wikipedia.org]. The features are a bunch of zero-mean box filter combinations applied to an image, combined with a threshold, each one giving a different binarisation of the image. Those were hand designed. A modified Adaboost[*] is then learned to get a good selection and weighting of the features to give a pixelwise classification of the image. You want a bounding box, so the final stage is to extract a bounding box from the binarisation.

      The "problems" with that are that there's nothing to say those features are optimal, and the pixelwise loss is the wrong thing to optimize. A deep system takes pixels in and spits out a bounding box (or several). The point is you can compute derivatives with respect to all the stages so you optimize everything agains the loss you're actually interested in.

      [*] They actually use a cascade (degenerate decision tree) with a biased Adaboost classifier at each node. Either way it's an ML algorithm with largely the same properties.
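
      For definition 2, "learning convolutional kernels" just means the small filter slid over the image is itself a trained parameter. The numpy sketch below shows the mechanics only; the hand-written edge-detector kernel stands in for what would normally be learned, and it is my own illustration rather than anything from the post.

      import numpy as np

      def conv2d_valid(image, kernel):
          # Naive "valid" 2-D convolution (strictly cross-correlation, as in
          # most deep-learning libraries): slide the kernel, take dot products.
          kh, kw = kernel.shape
          oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
          out = np.empty((oh, ow))
          for i in range(oh):
              for j in range(ow):
                  out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
          return out

      image = np.random.default_rng(0).uniform(size=(28, 28))

      # In a conv layer these 9 numbers would be learned parameters; here a
      # hand-written vertical-edge detector. 9 weights cover the whole 28x28
      # image, versus ~530,000 for a dense layer mapping 28x28 to 26x26.
      kernel = np.array([[1.0, 0.0, -1.0],
                         [2.0, 0.0, -2.0],
                         [1.0, 0.0, -1.0]])

      print(conv2d_valid(image, kernel).shape)   # (26, 26)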

  • by bradley13 ( 1118935 ) on Monday November 20, 2017 @02:37PM (#55588727) Homepage

    Once upon a time, I did my doctorate in machine learning. The machines were less powerful, but the algorithms? Basically the same as they are today. Sorry, the stuff most widely in use is still the same back-propagating neural networks. The machines are just faster, so the networks can be bigger. That's it.

    Neural networks can work really well on specific problem domains. The problem is: You have no idea what they are actually learning. [theverge.com] The features that a network identifies within its layers are not really accessible to us. The problem lies, imho, in the total lack of domain knowledge. Since the network doesn't understand what the objects in those pictures are, they are doing a purely mechanical analysis of some (and who knows which) aspects of the pictures. They can learn some really weird things.

    In a well-trained network, the results mostly coincide with our expectations. In a completely isolated domain, like chess or Go, a network can be trained sufficiently to perform quite well. However, in open domains, they are fragile: we have no idea when they will break. Look at the video of the turtle being identified as a rifle (in the link above). Why does the identification jump seemingly at random? When will a cat suddenly be guacamole? When will a pedestrian crossing the road suddenly be just a pile of leaves? We have no idea, none.

    It is certainly true that selecting and managing training data is a very different task from classic programming. However, it doesn't really take much domain knowledge. In most domains, gathering training data is tedious, not difficult. The hard part comes in figuring out how to make the best use of that data to train and test a network - and that requires a deep understanding of how the neural networks work (and how they don't work). Plus, frankly, a huge pile of trial and error, because there aren't many rules on how to best structure a net for any particular task.
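
    For anyone who hasn't written one since school, the "same back-propagating neural network" described above still fits in a few dozen lines. Below is a from-scratch numpy sketch trained on XOR (my own toy example, not from the post); modern frameworks mostly automate this gradient bookkeeping and scale it up.

    import numpy as np

    # Tiny 2-8-1 network trained with plain gradient descent on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for step in range(10000):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # backward pass: gradients of the cross-entropy loss
        d_out = p - y                            # dLoss/d(output logit)
        d_W2 = h.T @ d_out
        d_b2 = d_out.sum(axis=0, keepdims=True)
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)    # back through the tanh
        d_W1 = X.T @ d_h
        d_b1 = d_h.sum(axis=0, keepdims=True)
        # gradient descent step
        W1 -= lr * d_W1; b1 -= lr * d_b1
        W2 -= lr * d_W2; b2 -= lr * d_b2

    p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
    print(np.round(p, 3).ravel())   # should end up close to [0, 1, 1, 0]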

    • by swb ( 14022 )

      I think some of these mistakes are really kind of interesting in an epistemological way. They remind me of a child making what are apparently nonsense associations between things that turn out to be weirdly insightful. Adults don't make the same comparisons mostly because they've been taught they're wrong, not because they actually are.

    • by Anonymous Coward

      Deep Learning is a distinct evolution of back propagation. This is not your father's gradient descent... Deep learning does all sorts of stuff backprop could not do, even with serious hardware. I know, I used to implement back prop on a mini-supercomputer.

    • The machines are just faster, so the networks can be bigger. That's it.

      Not just. People have come up with better ways of training networks too, which allows bigger networks to be trained in reasonable time without appalling overfitting.

      People are also coming up with inventive loss functions and figuring out differentiable approximations of things we want to optimize, which allows the networks to be applied to a wider range of problems.

      Look up Generative Adversarial Networks for some rather fun stuff (pix

    • by HuguesT ( 84078 )

      Yes NN are the same basic architecture, but it's like saying we are still programming in C, so nothing has changed since the early days of Unix except computers are faster. You'd be right in a way but not quite.

      In ML we have discovered the importance of sparse representations and regularisation (from wavelets and optimisation) leading to better, more efficient learning methods; better gradient descent methods, and more importantly innovative architectures. The keywords of today are not backpropagation but d

    • by r0kk3rz ( 825106 )

      Neural networks can work really well on specific problem domains. The problem is: You have no idea what they are actually learning. [theverge.com] The features that a network identifies within its layers are not really accessible to us. The problem lies, imho, in the total lack of domain knowledge. Since the network doesn't understand what the objects in those pictures are, they are doing a purely mechanical analysis of some (and who knows which) aspects of the pictures. They can learn some really weird things.

      I think it's premature to be calling these things 'Artificial Intelligence', because as you say there doesn't really seem to be a whole lot of intelligence in these systems at all. The way I explain it is by calling them 'Artificial Instinct' machines instead, because that's closer to how these things actually function. The networks build up a set of kneejerk reactions to stimuli, which is why they seem to work well for things that humans can do without really thinking about it, like driving cars.

  • If so, at least the article's jumble of catchphrases still moved the universe forward. Yay.
  • Call me... (Score:5, Insightful)

    by sh00z ( 206503 ) <sh00z.yahoo@com> on Monday November 20, 2017 @02:43PM (#55588769) Journal
    ...when you can input a photograph of an airplane and the Navier-Stokes equation, and get a flight simulator as output.
  • Comment removed based on user account deletion
  • by GuB-42 ( 2483988 ) on Monday November 20, 2017 @03:02PM (#55588923)

    Most programming jobs involve connecting stuff together: converting one database format to another, designing a GUI around it, adding the entry points to turn it into some kind of module, extracting or integrating features, etc. Even machine learning typically involves gathering a bunch of data, turning it into a form that's acceptable to the learning module, and feeding the results to some other component.
    I don't know how machine learning will help with all that stuff. An AI won't write a video game; it can help make mobs smarter, generate convincing maps or optimize revenue. But in the end that's just a module connected to other modules, and programmers will be needed to put the round peg into the square hole.
    It will make things a bit more high level, as always. But except for a bunch of PhDs, I don't expect major changes in the way people program.

  • I work on mostly CRUD and e-reporting applications. Generally an org wants these kinds of apps to be predictable and reliable, not "organic" (trial and error). I don't see organic learning as a viable way to program such in the future.

    However, I can see AI being used to test the apps and find potential bugs in the source code, being that "suspicious pattern detection" is something it can do relatively well. It may also suggest code, schema, and UI refactorings. But such AI would be an adviser to programmer

  • This sounds suspiciously like a lot of 4GL promises that were made in the 80s and 90s. They also sound like the kinds of promises made by Microsoft promoting their distributed data model based on Office. Many times I've seen users get in over their heads with systems that start out easy, but get complicated quickly. Worse, sometimes they ended up with processes that produced erroneous data. Ultimately, they resort to piling the whole smoldering hot mess onto the programmers, who have to "make it work" s

  • No (Score:4, Insightful)

    by Jezral ( 449476 ) <mail@tinodidriksen.com> on Monday November 20, 2017 @04:30PM (#55589797) Homepage

    For computational linguistics (translation, analysis, etc), machine learning is not a net gain. What ML proponents forget to factor in is the vast time spent on gathering and hand-annotating large quantities of text (gold corpora).

    Even worse, for many many languages, these gold corpora simply do not exist and there are no plans on making them, or they are too small to be used for ML.

    And even when the gold corpora do exist, models trained on them become tightly coupled with the data. They become domain specific. In order to escape domains, you need an order of magnitude more data.

    Instead, one can make a domain-independent rule-based system in a fraction of the total time spent on machine-learning models. But rule-based has become this weird anathema - people will even write papers that use rule-based methods, while hiding it behind machine-learning terms.

    I'm sure this also holds for other fields.
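
    A toy flavour of the rule-based alternative being described: a handful of hand-written closed-class and suffix rules that tag English part-of-speech with no annotated corpus at all. Real rule-based systems (constraint grammars and the like) run to thousands of rules; this Python sketch only shows the shape of the approach.

    import re

    # Order matters: closed-class words first, then suffix heuristics, then a default.
    CLOSED_CLASS = {"the": "DET", "a": "DET", "an": "DET",
                    "is": "VERB", "are": "VERB", "of": "ADP", "in": "ADP"}
    SUFFIX_RULES = [
        (re.compile(r".+ly$"), "ADV"),
        (re.compile(r".+(ing|ed)$"), "VERB"),
        (re.compile(r".+(tion|ness|ment|ity)$"), "NOUN"),
        (re.compile(r".+(ous|ful|able|ible)$"), "ADJ"),
    ]

    def tag(tokens):
        tags = []
        for tok in tokens:
            low = tok.lower()
            if low in CLOSED_CLASS:
                tags.append(CLOSED_CLASS[low])
                continue
            for pattern, pos in SUFFIX_RULES:
                if pattern.match(low):
                    tags.append(pos)
                    break
            else:
                tags.append("NOUN")   # default to the largest open class
        return tags

    print(tag("the quickly moving translation of the document is readable".split()))
    # ['DET', 'ADV', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', 'VERB', 'ADJ']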

  • by HuguesT ( 84078 ) on Monday November 20, 2017 @07:59PM (#55591397)

    In my line of work I use a lot of mathematical optimisation. As Stephen Boyd [stanford.edu] says in his course, everybody working in optimisation has at some point this epiphany: "everything is an optimisation problem". And this is true. However to make it work you need to be very good at mathematical modelling, you need to know your methods, and most of the time the problem is unsolvable anyway by the classic methods.

    In this instance maybe a lot of programming can be modelled by some deep NN. However, you have to come up with a relevant architecture for your problem, you need to train it, and you need to evaluate it. It may save you time to do so, but if you need to solve something like FizzBuzz, that may not be the best way [joelgrus.com].
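
    For contrast with the linked neural-network FizzBuzz joke, the "boring old programming" baseline is a few deterministic lines that never misclassify a multiple of 15:

    # FizzBuzz the classic way: no training data, no model, never wrong.
    for i in range(1, 101):
        out = "Fizz" * (i % 3 == 0) + "Buzz" * (i % 5 == 0)
        print(out or i)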

  • Oh dear God please no. The Basic, PHP and Microsoft paper admins who can't program their way out of a paper bag will now all become Subject Matter Experts on everything to help us all out.

    "Why does that go there? Things doesn't work unless it does." "Wha's your problem, bud? Those leaks in the damn have always just been there, don't worry about them."

    And you thought things were bad now -- just wait until NO ONE knows what's actually going on, they only know what's SUPPOSED to be happening.

    Of cours
  • Public Service Announcement: deep learning is not even Turing complete. It is simply fancy nonlinear regression that works well on hierarchically-ordered domains.
  • Not only eating software, it's eating disk like crazy. I've seen millions poured into this deep learning stuff at a governmental level, and after a year of buying expensive servers with lots of CPU and lots of memory and lots of NVidia drivers running Linux, with even big frickin' Oracle databases or Hadoop, we get - TRASH!

    Sounding more like it did in the 1980s... big promises... short on reality.
