[HN Gopher] Swift for TensorFlow Shuts Down
       ___________________________________________________________________
        
       Swift for TensorFlow Shuts Down
        
       Author : high_derivative
       Score  : 279 points
       Date   : 2021-02-12 19:21 UTC (3 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | high_derivative wrote:
       | S4TF from my (external) perspective was always more a (very
       | cool!) programming language research project than anything else,
       | at least since Lattner left.
       | 
       | I would personally assume the shutdown was due to a combination
       | of reasons:
       | 
        | - There simply being no good reason for Python users to ever move
        | to Swift. There is no big pain point being solved for the broad
        | ML user community.
       | 
       | - Organisational momentum lost with Lattner leaving
       | 
        | - General disarray of TensorFlow as a project, and the parallel
        | rise of JAX from/within prominent Google orgs as the cool new
        | thing.
        
         | skohan wrote:
          | > There simply being no good reason for Python users to ever
          | > move to Swift. There is no big pain point being solved for
          | > the broad ML user community.
         | 
          | As a Swift and Python user, I would have been _really_ happy to
          | be able to use Swift for ML applications. Having a halfway
          | decent type system solves so many problems. But while I can see
          | that from my vantage point, I know that for the vast majority
          | of the ML community Python is "good enough", and there would be
          | quite a lot of inertia to overcome, which probably isn't
          | realistic.
        
           | marcinzm wrote:
            | I'm not sure the Swift type system is halfway decent for
            | machine learning applications. At least not without a lot of
            | work. The data tends to live in specialized in-memory formats
            | that are opaque to the type system. The actual matrix
            | operations involve dimension parameters that are likewise
            | opaque to most type systems. So you still need to run the
            | actual tensor graph compilation to validate that everything
            | is correct. Python in notebooks does that instantly; Swift, I
            | suspect, would have a much longer delay.
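            | 
            | A minimal Python/NumPy illustration of what I mean (the
            | shapes are invented): the annotations only say "ndarray" and
            | say nothing about dimensions, so a type checker is happy and
            | the mismatch surfaces only at runtime.
            | 
            |     import numpy as np
            | 
            |     def dense(x: np.ndarray, w: np.ndarray) -> np.ndarray:
            |         # Shape information is invisible to the type checker.
            |         return x @ w
            | 
            |     x = np.zeros((32, 128))  # batch of 32, 128 features
            |     w = np.zeros((64, 10))   # weights expecting 64 features
            |     dense(x, w)              # passes type checking; raises
            |                              # ValueError only when it runs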
        
             | skohan wrote:
              | I don't see any reason the Swift type system could not
              | accommodate the types required for ML. I did a lot of work
              | with computer graphics in Swift, which is also all linear
              | algebra, and the type system was a huge benefit.
              | 
              | Also, I don't see why Swift would perform worse than Python
              | in any case. If Python is fast, it's because it's wrapping
              | a C++ library. There's no reason Swift could not wrap the
              | same library, and on top of that Swift could communicate
              | better type information back to the user.
        
             | BadInformatics wrote:
             | In addition to what the sibling said, a big part of the
             | S4TF project was proposing changes to Swift's type system.
             | I'd also point out that there are not one, not two, but
             | three type checkers looking into tensor typing for Python.
             | What's more, Guido is on that committee!
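              | 
              | For a flavor of what that work looks like, here is a rough
              | sketch in the style of PEP 646 variadic generics (Python
              | 3.11+; the Tensor class here is made up, not from any real
              | library):
              | 
              |     from typing import Generic, Literal, TypeVarTuple, Unpack
              | 
              |     Shape = TypeVarTuple("Shape")
              | 
              |     class Tensor(Generic[Unpack[Shape]]):
              |         """Hypothetical tensor carrying its shape in the type."""
              | 
              |     def matmul(
              |         a: Tensor[Literal[32], Literal[128]],
              |         b: Tensor[Literal[128], Literal[10]],
              |     ) -> Tensor[Literal[32], Literal[10]]: ...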
        
               | marcinzm wrote:
                | Which is sort of my point. If you have to rewrite or
                | expand significant parts of the type system anyway, is
                | there any advantage over just doing the same work for
                | Python?
               | 
                | There are many other advantages of Swift over Python for
                | ML (concurrency and performance), but I just don't see
                | the type system as one. At least not anymore.
        
               | BadInformatics wrote:
                | I mean yes, because a) there's less to change, b) you
                | still have nice features like protocols, and c) Python's
                | type system is pretty unsound. My point was that saying
                | ML is too complex for type systems is really a cop-out,
                | and that has been borne out by how many languages have
                | started experimenting with more complex types for this
                | exact purpose.
        
           | swagonomixxx wrote:
            | > Having a halfway decent type system solves so many
            | > problems
           | 
            | What problems does it solve for you when building ML
            | applications? Because that's what S4TF was supposed to be
            | for.
           | 
            | ML researchers produce models that are used by ML
            | applications for a particular purpose. The code to train the
            | model _could_ be statically typed, sure, but I really don't
            | see what the improvement would be. It would be more verbose,
            | less readable, and have significantly more noise. Just try
            | reading the C++ equivalents of TensorFlow's Python API.
           | 
           | From the "WhySwiftForTensorFlow.md" document (https://github.
           | com/tensorflow/swift/blob/main/docs/WhySwiftF...):
           | 
            | > ... static types:
            | 
            | > * can catch bugs at compile time instead of runtime. People
            | > get very frustrated when a silly type error brings down a
            | > long training run hours into it.
            | 
            | > * directly improve tooling experience like code
            | > completion/intellisense, jump to definition, etc.
            | 
            | > * make it easy to know what operations are Tensor
            | > operations.
           | 
           | > can catch bugs at compile time instead of runtime.
           | 
            | OK, but what bugs? For most uses of TensorFlow that I've
            | seen, if there is a type error somewhere, you will catch it
            | long before it hits production, and if you only catch it 100
            | hours into training, your training code is either non-
            | deterministic or there's some other issue that is not type-
            | related. Unless we're talking about dependent types a la
            | Idris, I don't see how Swift's type system can help catch
            | these kinds of bugs.
           | 
           | > directly improve tooling experience like code
           | completion/intellisense
           | 
            | Sure, but type hints provide the same improved experience
            | with language servers like PyLance [0]. Not a reason to jump
            | to a new language.
           | 
           | > make it easy to know what operations are Tensor operations.
           | 
           | Easy. Anything that starts with "tf." is a tensor operation.
           | 
           | Yes, I'm being a bit trite, but I was never convinced by the
           | premise of S4TF, because I've never heard an ML researcher or
           | engineer say "static typing will fix my problems".
           | 
           | [0]: https://devblogs.microsoft.com/python/announcing-
           | pylance-fas...
        
             | skohan wrote:
             | My guess is that's because ML researchers typically have
             | not spent much time with statically typed languages.
             | 
              | I was also once convinced that static typing was not so
              | valuable, back when I was working a lot with JS, Python,
              | and Ruby, but the more time I spend with static typing the
              | more I like it.
             | 
             | There is an "activation energy" to overcome with static
             | typing: when you first start it feels like a tedious burden
             | to have to add type annotations, when your python code ran
             | just fine without them. But in my experience, with a little
             | bit of practice the type system starts to feel completely
             | effortless, and it saves you from all kinds of strange
             | runtime issues you run into when nobody is checking your
             | types.
             | 
              | Also, type systems encode a lot of intention into the code.
              | If I look at a function signature in Swift, I know exactly
              | what has to be passed in and exactly what will come out. In
              | Python, I have to rely on documentation, or maybe even dig
              | into the source to see what the function expects.
             | 
              | In my experience, the real difference is felt when you come
              | back to a dynamically typed language after working with
              | types for an extended period. Returning to Python after
              | spending time with Rust or Swift feels like walking down
              | the street with one of my senses missing. There are whole
              | categories of dangers I now have to watch out for which the
              | compiler would have caught for me automatically. It is not
              | pleasant to go back.
        
               | swagonomixxx wrote:
               | I am totally with you on statically typed vs. dynamically
               | typed languages _in general_. I'm not denying that there
               | are major advantages to using one for most kinds of
               | programs. But I just don't see a benefit for ML.
               | 
                | You can easily find out if you like it or not: use the
                | C++ API for TF/PyTorch and you'll have a statically typed
                | program running your model for you. Sure, it's not
                | Haskell, but C++ is statically typed, and although it's
                | not the most popular or loved language on HN, you can use
                | it to get stuff done.
        
               | marcinzm wrote:
                | I've written a lot of statically typed code and have gone
                | to conferences where Haskell was the keynote. I've done
                | ML work in both dynamically and statically typed
                | languages. For the ML work I do, I honestly don't see the
                | advantage. Most of the complexity is in the matrix and
                | data operations, which existing type systems don't check.
                | And the type annotations and IDE support are good enough
                | to catch most of the other boring errors. In my
                | experience, most deep learning models are also not
                | particularly complicated from a code perspective.
        
       | lostmsu wrote:
       | And TensorFlow for C# is alive and kicking:
       | https://losttech.software/gradient.html
       | 
       | I guess this is sort of an ad.
       | 
        | Swift was a weird choice for statically typed TensorFlow, being
        | popular only on a platform that does not have GPU/TPU support in
        | TensorFlow, which is basically a requirement for any serious
        | work. The fact that they had to fork the compiler did not help
        | either.
       | 
        | TensorFlow for C# is much like TensorFlow for Swift. There's a
        | statically typed binding to the core types, with the rest of the
        | TensorFlow API available through an open source Python.NET
        | project [1]. Unlike the Swift version, though, that second part
        | is also mostly statically typed, so you can get IDE
        | autocompletion hints. Also, the .NET runtime has native support
        | for dynamic languages.
       | 
        | Like with Swift for TensorFlow, we ported all the recent
        | interesting neural network architectures (and maybe even more):
        | Transformers (GPT-2) [2], CNNs (YOLOv4) [3], Q-learning (e.g. RL,
        | actor-critic) [4], and even some cool ones that have lots of
        | unexplored potential, like Siren [5] (this one uses sin/cos as
        | activation functions).
       | 
        | Although we have not worked on automatic differentiation yet,
        | unlike in Swift, it will not need compiler support. .NET (and
        | Java) can inspect and generate code at runtime, so autodiff can
        | be implemented as a library.
       | 
       | We also have integration with Unity ML Agents for training
       | robotic agents [4].
       | 
       | [1] https://github.com/pythonnet/pythonnet/
       | 
       | [2] https://github.com/losttech/Gradient-
       | Samples/tree/master/GPT...
       | 
       | [3] https://github.com/losttech/YOLOv4
       | 
       | [4] https://github.com/losttech/Gradient-
       | Samples/tree/master/RL-...
       | 
       | [5] https://github.com/losttech/Siren
       | https://vsitzmann.github.io/siren/
        
       | layoutIfNeeded wrote:
        | Cool! One fewer special case in the Swift language (automatic
        | differentiation).
        
       | The_rationalist wrote:
        | It would have been much more productive to improve support for
        | the Kotlin programming language instead.
        
         | BadInformatics wrote:
         | Well, it's happening:
         | https://news.ycombinator.com/item?id=25128251
        
       | joeblau wrote:
        | I'm looking through the meeting notes [1] linked from the archive
        | PR [2], and it looks like there hasn't been much activity on the
        | project recently. Can anyone with more insight explain what
        | happened?
       | 
       | [1] -
       | https://docs.google.com/document/d/1Fm56p5rV1t2Euh6WLtBFKGqI...
       | 
       | [2] -
       | https://github.com/tensorflow/swift/commit/1b1381ccb89a342ba...
        
         | rezahussain wrote:
         | https://twitter.com/bradlarson/status/1360230872658698244
         | 
         | https://twitter.com/texasmichelle/status/1360287563898974211
         | 
          | I think we just have to wait for the meeting video to be
          | posted.
        
       | amkkma wrote:
        | Julia is now the only viable next-gen ML language.
        
       | flipgimble wrote:
        | Another sad lesson in why no open source developer should trust
        | Google with their time and effort.
       | 
       | Swift is a gem of a language with an unfortunately Apple-centric
       | ecosystem, but at least you know Apple won't abandon the effort.
        
         | mcast wrote:
         | Let's be honest, Google still contributes more heavily to OSS,
         | whitepapers, and security research than Apple does.
        
       | amelius wrote:
       | Good. Nobody benefits from fragmentation in deep learning.
        
       | A12-B wrote:
        | I recall many users telling me Swift would be the new Python and
        | that it would take over the machine learning domain.
        
       | BasedInfra wrote:
        | How is Swift for things beyond applications in the Apple
        | ecosystem?
        
       | UncleOxidant wrote:
       | This seems like good news for Julia. Some wondered why Google
       | would develop S4TF with an LLVM backend when Julia already
       | existed and had better support for things like linear algebra
       | than Swift had.
        
       | ipsum2 wrote:
       | No surprises there. There were maybe a total of 5 people excited
       | about adding automatic differentiation to Swift. Too bad they
       | didn't try improving Julia instead.
        
         | favorited wrote:
         | > adding automatic differentiation to Swift
         | 
         | That work is still happening (as noted in the link). One of the
         | primary engineers who was working on it at Google works at
         | Apple now.
        
           | ncmncm wrote:
            | Speaking of shuffling, Dave Abrahams, who was (I believe) the
            | lead on the Swift standard library, moved to Google some time
            | back. Is he still doing Swift there?
        
         | _Wintermute wrote:
         | I remember their terrible article trying to justify why Swift
         | was the right choice out of a load of more appropriate
         | languages. I'm still annoyed they didn't go with Julia.
        
           | cglong wrote:
           | Source? Curious to read that :D
        
             | _Wintermute wrote:
             | Here you are: https://github.com/tensorflow/swift/blob/main
             | /docs/WhySwiftF...
        
         | remexre wrote:
         | Well, Julia already got it via
         | https://github.com/FluxML/Zygote.jl
        
         | xiphias2 wrote:
         | While I love using Julia for many things, I prefer having a
         | Python monoculture for AI to be able to mix and match different
         | algorithms.
         | 
          | In the end, the engine that compiles the mathematical
          | expression to hardware is what matters, and I don't think the
          | LLVM IR that Julia uses is the best IR for these optimizations.
        
           | PartiallyTyped wrote:
           | > I prefer having a Python monoculture for AI to be able to
           | mix and match different algorithms.
           | 
            | I do not. Unfortunately, high performance and Python do not
            | go hand in hand. Yes, I know the heavy lifting is done by
            | C/C++/Rust/CUDA/BLAS/Numba and so on, but when you run
            | simulations for millions of steps, you end up with billions
            | of Python function calls.
           | 
            | AFAIK only JAX actually performs any optimizations, because
            | it constructs an analytical gradient. Zygote seems to be able
            | to do that and more at the LLVM IR level, which I think
            | should enable more optimizations.
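            | 
            | To make the JAX point concrete, a toy example (not from any
            | of the simulations discussed here):
            | 
            |     import jax
            |     import jax.numpy as jnp
            | 
            |     def loss(w):
            |         return jnp.sum(w ** 2)
            | 
            |     grad_loss = jax.grad(loss)   # analytical gradient, 2*w
            |     step = jax.jit(grad_loss)    # traced and compiled by XLA
            | 
            |     print(step(jnp.ones(3)))     # -> [2. 2. 2.]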
        
             | wsmoses wrote:
             | The name of the LLVM AD tool is actually Enzyme
             | [http://enzyme.mit.edu/] (Zygote is a Julia tool)
        
               | eigenspace wrote:
                | I think they probably mistook Julia's IR for LLVM's IR.
        
             | swagonomixxx wrote:
              | I'm not sure I understand what you're getting at. If
              | Python's performance in the ML space were not sufficient,
              | the community would have quickly moved on from it and built
              | something better.
             | 
             | And that something better is certainly not Julia and it's
             | definitely not Swift.
             | 
              | > Yes, I know the heavy lifting is done by
              | > C/C++/Rust/CUDA/BLAS/Numba and so on, but when you run
              | > simulations for millions of steps, you end up with
              | > billions of Python function calls.
             | 
              | For ML purposes, if your model isn't running on the GPU, it
              | doesn't matter if you're using Swift, Rust, or whatever:
              | your stuff is gonna be slow. Like it or not, Python is one
              | of the best glue languages out there, and the reason
              | libraries like TensorFlow and Torch are not used in their
              | native language (C++) [0] is that they're significantly
              | simpler to use from Python and the performance overhead is
              | usually one function call (e.g. run_inference(...)), not
              | billions.
             | 
             | If you find yourself writing a simulation, and you need to
             | optimize away billions of function calls, you can use the
             | C++ API provided by TensorFlow.
             | 
             | [0] https://www.tensorflow.org/api_docs/cc
        
               | PartiallyTyped wrote:
                | > If Python's performance in the ML space were not
                | > sufficient, the community would have quickly moved on
                | > from it and built something better.
               | 
                | Python's performance is sufficient when the bottleneck is
                | actually the computation done on the accelerator. In my
                | flavour of ML, we use small models: think a 3-layer NN,
                | 64 neurons wide, and in some cases a small CNN. During
                | training, most of the models reported using <150 MB.
                | 
                | Most of the community finds Python sufficient because
                | they do not need to interleave training and simulation.
               | 
                | > For ML purposes, if your model isn't running on the
                | > GPU, it doesn't matter if you're using Swift, Rust, or
                | > whatever: your stuff is gonna be slow. Like it or not,
                | > Python is one of the best glue languages out there, and
                | > the reason libraries like TensorFlow and Torch are not
                | > used in their native language (C++) [0] is that they're
                | > significantly simpler to use from Python and the
                | > performance overhead is usually one function call (e.g.
                | > run_inference(...)), not billions.
               | 
                | You don't know what I am running or doing, hence your
                | comment comes off as ignorant.
               | 
               | This is the setup:
               | 
                | Run X simulated steps on the CPU, collect X samples,
                | store them in a buffer of size either X or Z >> X, train
                | for Y steps sampling from the buffer, copy the model back
                | to the CPU, repeat for a total of 1M steps, and repeat
                | the whole thing 10+ times to get a good average of the
                | performance.
               | 
               | All of that without any hyper parameter tuning.
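                | 
                | In rough code, the shape of that loop is something like
                | this (toy stand-ins for the env and model; the real ones
                | are much heavier, but the Python-level call pattern is
                | the point):
                | 
                |     from collections import deque
                |     from random import random, sample
                | 
                |     buffer = deque(maxlen=100_000)       # Z >> X
                |     X, Y, TOTAL = 4, 1, 1_000_000
                | 
                |     for _ in range(TOTAL // X):
                |         for _ in range(X):               # simulate on the CPU
                |             buffer.append((random(), random()))
                |         for _ in range(Y):               # train from the buffer
                |             batch = sample(list(buffer), min(32, len(buffer)))
                |             # model.train_step(batch) would be the only GPU work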
               | 
                | Now, you also need to note that a non-trivial amount of
                | work is also done in Python to augment the inputs as
                | necessary. If the work is done in NumPy, there's usually
                | little overhead, but it is often the case that the
                | environment I am simulating is wrapped in wrapper
                | functions that modify the behavior; e.g. I may need to
                | remember the last 4 instances created by the environment,
                | and so on. All these modifications quickly accumulate,
                | and for a single experiment of 1M steps you end up with
                | billions of Python calls. The community uses more or less
                | the same package/framework as the intermediary between
                | simulations and models.
               | 
                | The issue is so prominent that the community as a whole
                | moved away from running the simulations on a single core
                | to having multiple parallel actors collect
                | transitions/data points, which also requires new theory.
                | Furthermore, there have been many proposed architectures
                | for distributed and asynchronous training, because the
                | bottleneck is not the GPU or the model but rather how
                | fast you can collect transitions. In fact, there was a
                | distributed architecture by Google that literally sends
                | the transitions over the network to a few GPUs; the
                | network cost is amortized because you get to run hundreds
                | of simulations concurrently.
               | 
                | IIRC, a maintainer of a popular project/framework that I
                | contribute to saw improvements upwards of 2x when using
                | C++ over Python.
        
               | swagonomixxx wrote:
                | > Most of the community finds Python sufficient because
                | > they do not need to interleave training and simulation.
               | 
                | That's kind of my point. Most of the community has models
                | running on GPUs and doesn't care too much about CPU-bound
                | workloads, training or otherwise.
               | 
                | If you do care about that, you are in a relatively small
                | niche of machine learning, statistically speaking. I am
                | not denying its existence, I'm just saying that your
                | stack will have to be different if you want to extract
                | the maximum level of performance out of your CPU.
               | 
                | > IIRC, a maintainer of a popular project/framework that
                | > I contribute to saw improvements upwards of 2x when
                | > using C++ over Python.
               | 
                | That's not surprising at all. As I mentioned in the
                | parent, if you profiled your code and found that Python
                | function calls are the main bottleneck, and you believe
                | it worth investing the time to get rid of them, you can
                | use the C++ APIs of Caffe/TF/PyTorch/whatever.
               | 
                | I personally don't work on simulations, so I haven't run
                | into your problem. In the deep learning world, the CPU is
                | unusable for any task (training, inference, evaluation,
                | etc.), so I've never been concerned with things like
                | function call overhead.
        
               | amkkma wrote:
                | > If you find yourself writing a simulation, and you need
                | > to optimize away billions of function calls, you can
                | > use the C++ API provided by TensorFlow.
               | 
                | I prefer to just use Julia and get that for free. I can
                | also write custom CUDA kernels in pure Julia,
                | differentiate through arbitrary Julia code, compose
                | libraries that know nothing about each other, etc.
        
               | swagonomixxx wrote:
               | Sure, that's fine. But I'm assuming GGP is talking about
               | TensorFlow, since that's what this post is about. If you
               | don't need TensorFlow, then this whole conversation is
               | kinda moot, and you can use whatever you like.
        
               | amkkma wrote:
                | Nobody needs TensorFlow. That's just a kludge around the
                | fact that Python is terrible for ML.
        
               | swagonomixxx wrote:
                | So nobody needs TensorFlow, yet it and PyTorch are pretty
                | much the only frameworks widely adopted for ML, and
                | they're in Python. A quick Google search will tell you
                | that; I don't really feel like rebutting a baseless
                | statement.
        
           | ChrisLomont wrote:
            | > In the end, the engine that compiles the mathematical
            | > expression to hardware is what matters,
           | 
            | This is becoming increasingly outdated, as significant parts
            | of scientific machine learning require code that doesn't
            | simply compile to GPUs. Think, for example, of mixing a
            | physics model, say for RF or some such, with deep learning to
            | model parts of the function. In Python, you cannot write the
            | RF model because Python is vastly too slow, so you're forced
            | to write it in something fast, like C/C++, then integrate
            | that with Python, then integrate that with your favorite
            | tensor framework: more languages than you need with Julia,
            | where you can do it all directly.
           | 
            | Deep learning is rapidly moving beyond being simply a tensor
            | engine and becoming a tool in much larger problems, where
            | many high-performance pieces need to be developed. Julia is
            | light years ahead of Python for these domains, and I cannot
            | see Python ever catching up, because it suffers from
            | performance and/or multiple-language problems when solving
            | them.
           | 
            | If you've never learned about scientific machine learning, go
            | read some articles or watch some videos. It's fascinating and
            | growing rapidly.
        
           | BadInformatics wrote:
           | > I prefer having a Python monoculture for AI to be able to
           | mix and match different algorithms
           | 
           | Given how divergent the current crop of ML frameworks are, is
           | this really a realistic expectation? Having played around
           | with Julia and Flux for ML, I find I have to do just as much
           | rewriting when translating e.g. TF -> Flux as TF -> PyTorch.
           | You get some limited mixing and matching with Caffe2 <->
           | Torch and TF <-> JAX, but that breaks down the moment you
           | leave a company's walled garden.
           | 
            | > I don't think the LLVM IR that Julia uses is the best IR
            | > for these optimizations.
           | 
           | I think Chris Lattner agrees, which is why he also helped
           | start https://mlir.llvm.org/. If anything, I predict we'll
           | see more frameworks targeting it (prototypes for Numpy,
           | PyTorch, TF and general XLA already exist). This implies that
           | languages that target LLVM now will actually have a leg up
           | because their compiled semantics can be more easily lowered
           | to something accelerator-friendly.
        
             | iujjkfjdkkdkf wrote:
              | I agree that there is poor interoperability between the
              | different DL / auto-diff frameworks in Python. But I
              | suspect the GP is referring also to things like scikit-
              | learn, NumPy, plotting tools, PIL, OpenCV, pandas, etc.
              | that make up the Python ecosystem. I know that alternatives
              | exist in other languages, but I don't know of an overall
              | ecosystem that is as mature and interoperable as Python's.
        
               | amkkma wrote:
               | Then you haven't looked at Julia's ecosystem.
               | 
               | It may not be quite as mature, but it's getting there
               | quickly.
               | 
               | It's also far more interoperable because of Julia's
               | multiple dispatch and abstract types.
               | 
                | For example, the https://github.com/alan-turing-
                | institute/MLJ.jl ML framework (sklearn on steroids) works
                | with any table object that implements the Tables.jl
                | interface out of the box, not just with dataframes.
               | 
               | That's just one example.
        
               | iujjkfjdkkdkf wrote:
                | To be clear, I'm a big Julia fan and would love to be
                | able to move to Flux.jl (I have not looked at the
                | framework you linked but will now). But Julia is still
                | overall more fragmented, more sparsely supported, and not
                | as easy to work with as Python for ML projects. If I have
                | a budget to work against, I can't justify trying to deal
                | with the rough edges in Julia.
        
               | BadInformatics wrote:
               | I would've assumed so too had they not mentioned both AI
               | and algorithms. That to me strongly implies deep learning
               | frameworks. Also, stuff like https://data-
                | apis.github.io/array-api/latest/ exists _because_ of how
               | many inconsistencies are present among related libraries
               | in the Python ecosystem.
        
       | meteor333 wrote:
        | I love Swift and its potential for use cases beyond just the iOS
        | ecosystem, but I think Apple never fostered or encouraged the
        | open source community to get involved in its development.
        | 
        | It's another classic example of how it's simply not enough to
        | open source something; you have to really allow and encourage
        | developers to contribute in order to increase adoption.
        
         | skohan wrote:
          | I was active in the Swift language community for a couple of
          | years, and I think it was not just a lack of encouragement, but
          | rather a sense of neglect of use cases outside the Apple
          | ecosystem.
         | 
          | For instance, maybe it's fixed now, but for _years_ running the
          | Swift REPL on Linux would spit out a few error messages every
          | time you ran it. It still worked, but it gave the impression of
          | being poorly supported.
         | 
          | What really killed it for me was the rollout of features like
          | FunctionBuilders (now result builders) and property wrappers.
          | These were basically crammed into the language in a half-
          | finished state with no community review, to support the
          | requirements of SwiftUI, despite many other features
          | languishing for years and not being implemented due to concerns
          | about how they would affect Swift's "design space".
         | 
         | Following that, I had the impression that Swift was and would
         | always be prioritized towards Apple's goals, and any use-case
         | outside of this would be a distant follower.
        
         | Hamuko wrote:
         | What's Apple's incentive to do so?
        
           | skohan wrote:
           | Theoretically if Swift was an extremely marketable language
           | across many domains, there would be more qualified developers
           | available to work on software for Apple products.
        
             | Hamuko wrote:
             | But hasn't Apple managed to force a lot of developers to
             | pick up the language anyways?
        
               | skohan wrote:
                | Apple is one of the most successful organizations on the
                | planet, and yes, they have plenty of developers in tow. I
                | am just trying to give an example of how it could help
                | their bottom line to push open source Swift more
                | aggressively.
               | 
               | If I had to guess, they don't do it because:
               | 
               | 1. they don't think the resources it would take would
               | represent a good ROI,
               | 
                | 2. and/or they think it's good for them that developers
                | who invest in their ecosystem have skills which are not
                | transferable to other domains
        
           | darzu wrote:
           | Same as MSFT's with C#?
        
         | timtimmy wrote:
         | I'm excited about a few projects bringing Swift to other
         | platforms. SwiftWasm is surprisingly painless to get working
         | [1]. Tokamak is an early stage SwiftUI-compatible framework for
         | building web apps [2]. It also has the goal of supporting other
         | native platforms and to that end there is early work on a GTK
         | backend. I have no affiliation with these projects, though I
         | would like to contribute to them at some point.
         | 
         | [1] https://swiftwasm.org/
         | 
         | [2] https://github.com/TokamakUI/Tokamak
        
       | belval wrote:
       | Swift for TensorFlow was a nice idea on paper, but you need more
       | than ideas to push a project forward. You need people who
       | actually want to use your product.
       | 
        | For all the comments on HN about how great Swift was, there were
        | hundreds if not thousands of ML engineers and research scientists
        | who did not know it existed and frankly did not really care about
        | it either.
       | 
        | Swift for TensorFlow was not addressing the issues of ML in
        | production and felt like "just another framework", but with the
        | added handicap of needing its users to learn an entirely new
        | language, which is not something most scientists want to do. One
        | might argue that engineering might be more interested, but in a
        | lot of cases the science team will push the models and then the
        | engineering team will put them in production, leaving very little
        | room to migrate from TF/PyTorch to Swift for TF.
        
         | SatvikBeri wrote:
         | It's interesting to compare Swift to Julia here - Julia had a
         | base of people who absolutely loved it from the 0.2 days, and
         | has managed to draw many users from Python/Matlab/R. There are
         | many reasons for that, but overall Julia had something that
         | really appealed to scientists: syntax they liked + competitive-
         | with-C performance.
         | 
            | In contrast, S4TF's target audience seems to have been Swift
            | developers, and it didn't really appeal to the data science
            | crowd.
        
           | belval wrote:
            | Precisely. S4TF felt like a startup product with no market
            | research. It really felt like the only reason Swift was
            | chosen was that the main dev created it.
        
       | cs702 wrote:
       | S4TF lost out, not to Python, but to Python's AI/ML ecosystem --
       | people, projects, libraries, frameworks.
       | 
       | Despite its many shortcomings, Python has become the _lingua
       | franca_ of AI and ML, to the point that whenever I come across a
       | newly published AI or ML paper that I find interesting, I
       | _expect_ to be able to find code implementing it in Python.
       | 
       | For example, yesterday I saw a post on HN about approximating
        | self-attention matrices in transformers, which have O(n^2)
       | computational cost, with a seemingly clever approach that has
       | O(n) computational cost:
       | https://news.ycombinator.com/item?id=26105455 . "Huh, that looks
       | interesting," I thought, "let me see if I can find some code."
       | Three clicks later, I found myself at
       | https://github.com/mlpen/Nystromformer -- the official
       | implementation... in Python. A few moments later, I was playing
       | with this thingamajiggy. No other language comes close to having
       | this kind of ecosystem in AI and ML.
       | 
       | Julia still has a shot at becoming a viable alternative to
       | Python, especially if the Julia developers can shorten the "time
       | to first interactive plot" to make it as fast as Python's, but
       | they face an uphill battle against such an entrenched ecosystem.
        
         | lostmsu wrote:
          | The Nystromformer attention layer is actually pretty simple.
          | You can reimplement it in a few minutes.
        
           | cs702 wrote:
           | Sure, but isn't it nice that you don't even have to read the
           | paper to play with the model, to see if the claims hold up?
        
         | onelovetwo wrote:
          | It's getting the same kind of grasp on ML that JS has on the
          | web... which is sad to see.
        
           | belval wrote:
           | This is a false equivalence of the first order. What makes JS
           | impossible to dethrone is that whatever replaces it would
           | need to be supported by a critical mass of web browsers,
           | which just won't happen.
           | 
           | AI/ML on the other hand has no such restriction. Your new
           | startup could build all their ML models in Java with
            | DeepLearning4J and it would probably work fine. Python is
            | used because it's easy to work with for experimenting, which
            | is a big part of research, and for production you can extract
            | your weights and graph structure to run on TVM/TensorRT/ONNX.
            | There is no equivalent vendor lock-in.
        
       | chrisaiv wrote:
        | Is it safe to assume that this is the main driver for this event?
        | https://www.businessinsider.com/chris-lattner-former-apple-g...
        
       | skohan wrote:
       | It's a shame. I had high hopes at the beginning that S4TF - and
       | the investment in Swift from Google - would help Swift break out
       | of the iOS ghetto and cement it as a mainstream language.
       | 
        | Swift's a delightful language to use. It has a lot of the nice
        | things about Rust's type system, but is a heck of a lot easier to
        | use at the expense of a bit of performance. For a lot of use
        | cases, I think this is a great value proposition. ML/data science
        | is a great example where practitioners could benefit from a more
        | grown-up type system than what Python has on offer, but would
        | still be able to keep low-level details at arm's length.
       | 
       | I think Serverless would be another ideal use-case for Swift,
       | where the productivity, clarity and correctness tools it offers
       | would be a huge benefit.
       | 
        | Some very interesting things came out of the S4TF project, like
        | the work on autodiff and Python interop. It's a shame the project
        | never really seemed to get legs; it seems like it just languished
        | for years with no clear direction.
       | 
        | These days I do most of my work in Rust, and I'm happy to do so
        | because the tooling, community, and ecosystem are really amazing.
        | But there are a lot of language concepts and features I miss from
        | Swift. I guess it goes to show that the governance and
        | availability of a language have a lot more to do with adoption
        | than the merits of the language itself.
        
         | ragnese wrote:
         | I agree and am also disappointed. On the other hand, I really
         | don't miss Swift outside of iOS because of Rust and Kotlin.
         | 
          | I wonder if Kotlin keeps the low-level details further than
          | "arm's length" away for your taste? Because other than that, I
          | actually prefer the language over Swift, generally.
        
           | RhodesianHunter wrote:
           | Agreed. Thomas Nield has some interesting talks and libraries
           | available for doing data science work in Kotlin.
        
           | The_rationalist wrote:
            | Kotlin is one of the very few languages to have a compiler
            | plugin API, which allows Facebook researchers to make it
            | auto-differentiable on demand:
            | https://news.ycombinator.com/item?id=25128251
        
         | wendyshu wrote:
         | What are some ways in which Python's type system should
         | improve?
        
           | darthrupert wrote:
           | Have an effect at runtime.
        
             | wool_gather wrote:
              | What does this mean? Runtime is the _only_ time that
              | Python's existing type system has an effect.
        
               | BadInformatics wrote:
               | GP was referring to type hints and how they don't provide
               | any runtime validation. There are libraries like Pydantic
               | that let you do some, but nothing built into the language
               | runtime IIRC.
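                | 
                | A minimal Pydantic sketch of what "runtime validation"
                | means here (the config fields are made up):
                | 
                |     from pydantic import BaseModel
                | 
                |     class TrainConfig(BaseModel):
                |         learning_rate: float
                |         epochs: int
                | 
                |     TrainConfig(learning_rate=1e-3, epochs=5)   # fine
                |     TrainConfig(learning_rate="oops", epochs=5)
                |     # ^ raises ValidationError at runtime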
        
           | suyash wrote:
            | First, managing Python dependencies and versions; the current
            | system is horrible. Focus on fixing that. Second,
            | performance: Swift is much faster than Python. Third, Swift
            | is a much more advanced and intelligent language to program
            | in than Python. When I write Swift code, I feel like a PhD,
            | and when I write Python, I'm back in elementary school.
        
             | swagonomixxx wrote:
             | It doesn't matter that "Swift is faster than Python". Your
             | TensorFlow graph is gonna be slow as all hell if it's
             | running on the CPU. The bottleneck here really isn't
             | performance on a CPU. That's a no-brainer for all non-ML
             | applications.
             | 
             | But for applications that require a GPU (i.e, most ML
             | applications) cutting over to Swift from Python will likely
             | win you nothing in performance, and wouldn't be worth it at
             | all.
        
           | superbcarrot wrote:
           | It fundamentally can't be improved in a significant enough
           | way and still be Python. The more realistic options are to be
           | okay with Python's type system (I personally am) or to look
           | at different languages if you really want something more
           | robust.
        
             | 6gvONxR4sf7o wrote:
              | I don't think that's accurate. Python's type system is
              | largely defined by mypy + the annotations, in addition to
              | runtime Python. In that regard, Python's type system has
              | been evolving really quickly, and it's becoming something
              | quite interesting. I'm a fanboy of static type systems, and
              | in the last few years Python's type system has gone from
              | just bearable to really useful. I'm excited to see where it
              | goes from here.
        
               | skohan wrote:
                | But to some extent, isn't it always going to be bolted on
                | and dependent on library authors as to how consistently
                | it's applied?
        
               | girvo wrote:
               | For what it's worth, TypeScript had somewhat of the same
               | problem, and yet these days it's rare for me to find a
               | well known library that doesn't include type definitions!
        
               | 6gvONxR4sf7o wrote:
               | The dependence on library authors is always a challenge
               | in any language. You might have one author using `[a]`
               | where another uses `PositiveNumeric a, Fin n =>
               | NonEmptyList n a` for the same thing. You can always just
               | annotate whatever the library author used (e.g. they
               | return a list of strings, so you use List[str]).
               | 
                | There are some interesting further add-ons that seem very
                | Pythonic, allowing you to go further. For example, with a
                | pandas dataframe you can just say your type is a
                | dataframe, which isn't so useful, but it's possible to
                | hack your own types onto it in the vein of
                | https://github.com/CedricFR/dataenforce, or use things
                | like https://smarie.github.io/python-vtypes/ to get
                | smarter typing on things the authors didn't type. I
                | expect that trend will continue.
               | 
                | What fascinates me about Python's types is actually the
                | very fact that they are bolted on. You have a language
                | that lets you do crazy things and a type system trying to
                | catch up and make it convenient to verify those crazy
                | things. It's a nice complement to the usual approach of
                | verifying all of the things and slowly extending the set
                | of things you can do.
        
             | simias wrote:
             | I agree with you, although I must say that as an occasional
             | Python user the way type hints are implemented absolutely
             | baffles me. In particular the fact that they are completely
             | ignored by the standard implementation. They're glorified
             | comments.
             | 
              | Given how opinionated the Python maintainers can be, it
              | baffles me that they agreed to put these optional, noisy,
              | half-baked type hints into the core language.
             | 
              | In my experience, given that they're optional and you'll
              | almost never get 100% of your code _and_ its dependencies
              | with correct and up-to-date signatures, it's just a
              | nuisance. Maybe in the right projects, if all the devs are
              | very thorough with them, it can be helpful, but that's
              | really not my experience. If it at least triggered an
              | assertion at runtime when the type doesn't match, it would
              | be massively more useful. And even then, if you're so
              | thorough with your typing, why not just use a proper
              | statically typed language?
              | 
              | I really don't get why it's even there, to be honest.
        
               | mumblemumble wrote:
               | > the fact that they are completely ignored by the
               | standard implementation. They're glorified comments.
               | 
               | Not quite. You can reflect on and interact with them at
               | run-time, too. This does make it possible to implement
               | run-time checking as a library solution.
               | 
               | I can't speak to why run-time checking wasn't built into
               | the language as a default behavior, and the PEP doesn't
               | explain why (though perhaps the answer is in the mailing
               | list), but one possibility is that it would be wasteful.
               | Most functions that are only prepared to accept certain
               | types already have some form of run-time type check baked
               | in, either explicitly or incidentally, so adding a second
               | check would likely introduce overhead without producing a
               | whole lot more type safety.
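                | 
                | For the curious, a toy sketch of the library approach
                | (not how any particular library actually does it):
                | 
                |     import functools
                |     from typing import get_type_hints
                | 
                |     def checked(fn):
                |         # Look up the annotations once, at decoration time.
                |         hints = get_type_hints(fn)
                |         names = fn.__code__.co_varnames
                | 
                |         @functools.wraps(fn)
                |         def wrapper(*args, **kwargs):
                |             for name, value in zip(names, args):
                |                 expected = hints.get(name)
                |                 if expected and not isinstance(value, expected):
                |                     raise TypeError(f"{name} must be {expected}")
                |             return fn(*args, **kwargs)
                |         return wrapper
                | 
                |     @checked
                |     def scale(x: float, factor: float) -> float:
                |         return x * factor
                | 
                |     scale(2.0, "oops")   # TypeError at call time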
        
               | swagonomixxx wrote:
               | Type hints are not perfect, but IMO after using them for
               | a while, they are significantly better than nothing.
               | 
                | If you do the legwork, you can get mypy [0] type checking
                | on your codebase in a way that is similar to how
                | TypeScript does type checking. There are stub files
                | provided by project maintainers that are then used by
                | mypy to infer things like the return type of a function.
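                | 
                | For example (an illustrative stub, not from any real
                | project), a maintainer can ship an inference.pyi next to
                | inference.py:
                | 
                |     # inference.pyi
                |     import numpy as np
                | 
                |     def run_inference(batch: np.ndarray) -> np.ndarray: ...
                | 
                | and then running `mypy your_project/` flags calls that
                | pass the wrong types, without touching the runtime code.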
               | 
                | Type hints are also inline in the code, and not
                | technically comments. They can be retrieved at runtime
                | using facilities from the standard library [1] and can
                | feed other tooling, whether specific to your project or
                | more general.
               | 
               | > And even then, if you're so thorough with your typing,
               | why not just use a proper statically typed language?
               | 
               | That would remove a lot of the benefit of choosing
               | Python. Python is dynamically typed. Type hints make it
               | possible to do type checking statically, but with a lot
               | of extra leg work (as I described above). Making Python
               | itself statically typed is not something that would
               | interest 99% of the Python community IMO.
               | 
               | [0] http://mypy-lang.org/
               | 
               | [1] https://docs.python.org/3/library/typing.html#typing.
               | get_typ...
        
               | laurencerowe wrote:
                | I wish JS had runtime-ignored type annotations built into
                | the language, so that TypeScript code could run directly
                | without a transpilation step.
        
         | nextos wrote:
         | I love all languages in the ML/Haskell tradition, but I think
         | Julia would have been a better fit because it's dynamic yet
         | efficient, and because it has a really decent
         | probability/statistics/ML ecosystem already. Long term, I think
         | it's the best replacement we have for Python in the ML world.
         | 
         | Python has exceptional libraries but, as a language, it's a bit
         | dated on several fronts. This has an impact on library design.
         | In Python, ML libraries are huge monoliths and they depend on a
         | lot of code written in other languages. They are really hard to
         | understand or modify.
         | 
         | In Julia, things are really small and composable. For example,
         | you have a probabilistic programming library like Turing and a
         | differentiable programming one like Flux, and it's trivial to
         | implement some Bayesian neural networks. Same applies to many
         | other things. It's small and beautiful, but it needs more
         | manpower to compete against Python.
        
           | spamizbad wrote:
           | Julia gets compared a lot to Python but truthfully it strikes
           | me as more like a replacement for FORTRAN, MATLAB, and R. The
           | ergonomics of the language seem good but documentation is
           | poor and as a community it's narrowly scoped into the
           | ML/numerical processing world.
           | 
            | If all you do is ML/numerical processing, against data that's
            | already been cleaned up, I bet it's really great though.
        
           | swagonomixxx wrote:
           | > Python has exceptional libraries but, as a language, it's a
           | bit dated on several fronts.
           | 
           | I've been using Python for ML for the last 3 years and I've
           | never felt this way. It might be that I'm not all about the
           | hip new languages, but I don't really see the benefit of
           | making Python more ML/Haskell-ish.
           | 
            | The ML use case for Python is roughly as follows: you design,
            | train, and evaluate models. Then, if something is decent
            | enough to use in production, you switch your application over
            | to load and use that instead. I don't really see where
            | Haskell or any language from the ML family can improve that
            | process.
           | 
           | Sure, the code that you used to implement your model may
           | improve slightly, but I don't see that code improving
           | significantly. The fruit of your labor is usually a protobuf
           | file (encoding the TensorFlow graph) or whatever your
           | framework uses to encode the model you built. The code
           | actually surrounding it is very minimal for most use cases.
           | 
           | > In Julia, things are really small and composable. For
           | example, you have a probabilistic programming library like
           | Turing and a differentiable programming one like Flux, and
           | it's trivial to implement some Bayesian neural networks.
           | 
           | There's nothing stopping you from composing things in Python.
           | But it's simply not a goal of 99% of ML libraries to be
           | composable with other libraries. You're probably never gonna
           | run TensorFlow and PyTorch in the same application (trust me,
           | I've tried, it was a nightmare) and I don't see why you would
           | compose a TensorFlow model with a PyTorch model without
           | incurring tons of overhead in your application around gluing
           | these two things.
        
             | amkkma wrote:
              | > There's nothing stopping you from composing things in
              | > Python.
              | 
              | In Julia you can compose a custom distribution with a
              | Bayesian model, with an ODE, with a neural network, with
              | unit-number types, with custom Julia-written CUDA kernels,
              | and with multithreading.
              | 
              | Edit: ...none of which are designed specifically to work
              | with each other.
              | 
              | Can Python even hope to do a fraction of that, still be
              | fast, and differentiate through everything?
        
               | swagonomixxx wrote:
               | I'm fairly certain everything you said is possible except
               | for custom CUDA kernels in pure Python. You'd have to
               | write the kernel in C++ and use it in your
               | TensorFlow/PyTorch code. [0][1]
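                | 
                | For instance, with PyTorch the usual route is
                | torch.utils.cpp_extension, roughly like this (the file
                | names and the `forward` function are invented; they come
                | from whatever the C++/CUDA source defines):
                | 
                |     import torch
                |     from torch.utils.cpp_extension import load
                | 
                |     # JIT-compile the C++/CUDA sources into a module.
                |     fused_op = load(name="fused_op",
                |                     sources=["fused_op.cpp",
                |                              "fused_op_kernel.cu"])
                | 
                |     out = fused_op.forward(torch.randn(8, 8, device="cuda"))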
               | 
               | [0]: https://www.tensorflow.org/guide/create_op
               | 
               | [1]:
               | https://pytorch.org/tutorials/advanced/cpp_extension.html
        
               | amkkma wrote:
                | It's definitely not possible. These have to be rewritten
                | with the specific autodiff/ML framework in mind.
               | 
               | Even then, you're not going to have fast custom types to
               | be used on the GPU without dropping into C++
        
               | swagonomixxx wrote:
               | > It's definitely not possible. These have to be
               | rewritten with the specific autodiff /ML framework in
               | mind.
               | 
               | I don't understand why it's not possible. You're asking
               | if it's possible in the language. I don't see anything
               | stopping you from, as you say, writing your own
               | framework, or simply doing it in TF. Perhaps my ignorance
               | of Julia is showing :)
        
               | chc wrote:
               | If you're rewriting everything to get the behavior rather
               | than just hooking together preexisting pieces, that's not
               | what's generally meant by composition.
        
               | selectodude wrote:
               | The outcome is more important than the code. While that
               | sounds very elegant, what does it actually add?
        
               | amkkma wrote:
               | https://www.reddit.com/r/Julia/comments/keuumi/bayesian_n
               | eur...
               | 
               | See the paper.
               | 
               | It allows for the ecosystem to have a combinatorial
               | explosion of possibilities as each domain expert works on
               | their own package.
               | 
               | I can't foresee every application, but the emergence is
               | the point. This is just a crazy example to show the
               | bounds of what's possible.
        
           | Karrot_Kream wrote:
           | > In Julia, things are really small and composable. For
           | example, you have a probabilistic programming library like
           | Turing and a differentiable programming one like Flux, and
           | it's trivial to implement some Bayesian neural networks.
           | 
           | I was taken aback, when looking at Turing for Bayesian
           | modelling, to find that the distributions were just the
           | standard distributions from the Distributions package! In
           | Python, every Bayesian framework has its own implementation
           | of everything from distributions to log probabilities, but it
           | all composes in Julia.
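           | 
           | Concretely, in Python each stack ships its own Normal and its
           | own log-prob (a rough sketch; scipy, PyTorch, and TFP shown):
           | 
           |     from scipy.stats import norm
           |     import torch
           |     import tensorflow_probability as tfp
           | 
           |     # three independent implementations of the same density
           |     norm.logpdf(0.5, loc=0.0, scale=1.0)
           |     torch.distributions.Normal(0.0, 1.0).log_prob(
           |         torch.tensor(0.5))
           |     tfp.distributions.Normal(0.0, 1.0).log_prob(0.5)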
           | 
           | > It's small and beautiful, but it needs more manpower to
           | compete against Python.
           | 
           | Agreed. Docs of major projects are still incomplete or non-
           | existent, there are a lot of projects that have been nigh
           | abandoned ( _cough_ naive bayes _cough_ ), and the
           | composability of Julia, coupled with the lack of strong
           | leadership, leads to an ecosystem of overlapping
           | functionality. Still, I hope that by leveraging PyCall,
           | Julia can overcome some of these issues.
        
           | ZeroCool2u wrote:
           | function julia_blues(bummers...)
           | 
           | I largely agree, Julia is such a cool language and had so
           | much potential. It definitely surprised me when they went
           | with Swift instead, but realizing that Chris Lattner worked
           | at Google at the time explained a lot. Unfortunately, every
           | time I try to get into Julia, it just feels awkward coming
           | from Python and a bit like stepping back in time.
           | 
           | The stupidest things (stupidest in the sense that I really
           | wish they didn't bother me, because they're silly things!)
           | that bother me are:
           | 
           | 1. The 'end' keyword. It's everywhere! Loops, if-else
           | statements, function bodies. I mean I understand why it's
           | helpful for parsing and providing more info to the compiler,
           | but honestly, I would've rather we just stuck with curly
           | braces '{}'! At least it's fewer characters to type and it
           | just feels less crowded on screen while reading. It feels
           | like such a petty complaint, but honestly, it feels like I'm
           | writing Pascal or Matlab all over again. Which leads me to my
           | second point.
           | 
           | 2. The default choice of 1 based indexing. I'm not going to
           | go into it, because plenty of people before me have beat this
           | dead horse already[1][2][3], but I can't help but be saddened
           | by this choice and its implications. It's important to
           | acknowledge the fact that Julia started as a competitor to
           | Matlab and Octave, so it makes sense from that perspective.
           | However, it could have been a much more general purpose
           | language, with huge performance benefits over popular
           | interpreted languages like Python, JS, and Ruby. It could
           | have been a unifying force in the scientific computing
           | community, bridging the gap between R and Python users with a
           | greenfield approach that would have been a force to be
           | reckoned with. Instead, rightly or not, it's viewed largely
           | as 'just' a matlab replacement.
           | 
           | Now, regardless of whether 1 or 0 based indexing is truly
           | 'better' or the 'end' keyword is no big deal, the reality is
           | that there's a huge demographic of potential users that won't
           | buy into Julia, because it doesn't quite feel as ergonomic as
           | Python/JS/Ruby and won't take it seriously as a general
           | purpose language, because it looks/feels like Matlab and they
           | only use 'real' programming languages. Again, I'm not saying
           | this is right, but it is the reality we're faced with and it
           | just feels like a huge missed opportunity and bums me out.
           | end
           | 
           | 1. https://github.com/JuliaLang/julia/pull/16260#issuecomment
           | -2... 2. https://groups.google.com/g/julia-
           | dev/c/tNN72FnYbYQ?pli=1 3.
           | https://github.com/julialang/julia/issues/558
        
             | skohan wrote:
             | Language aesthetics do matter. Broadly it seems like we've
             | accepted that "c like" syntax - including brace-delimited
             | blocks and zero-based indexing - should be the norm for
             | programming languages. Any language which goes in a
             | different direction should have a _very_ strong reason to
             | do so, because any deviation will be an obstacle for
             | adoption.
             | 
             | I share your frustration with the `end` keyword - it's just
             | needlessly verbose, and for something used so frequently it
             | makes sense to be as terse as possible.
             | 
             | I have some similar quibbles with Rust syntax: I know it's
             | a minor issue, but I'm really disappointed that snake_case
             | was adopted. It's just ergonomically inferior to camelCase
             | in every measure. It's one more character to type for every
             | word, and on top of that, on a US keyboard, the underscore
             | character is a pinky key way far away from the center of
             | the keyboard. Making this one of the most frequently typed
             | characters in the language makes no sense.
        
               | zozbot234 wrote:
               | snake_case is a _lot_ more readable than CamelCase, which
               | is a huge benefit ergonomically. The keyboard layout is
               | no big deal, one can easily change it or use shortcut-
               | based autocomplete. Rust does use CamelCase for type
               | identifiers, trait identifiers and enum constructors, and
               | the contrast with snake_case also aids readability.
        
               | skohan wrote:
               | I disagree that it's "a _lot_ " more readable. I have
               | read a lot of camelCase code in my lifetime, and I can
               | count on zero hands the number of times I ever had an
               | issue parsing code due to the use of camelCase.
               | 
               | Keyboard remapping seems like an extreme solution, and I
               | don't want to train my fingers in such a way that when I
               | sit down at a different workstation that doesn't have my
               | .vimrc I can't type rust anymore.
               | 
               | You don't need snake_case for contrast. You can use
               | lowerCamel and UpperCamel to denote the same levels of
               | significance. SCREAMING_SNAKE is fine for constants
               | because they don't get used all that often, but rust maps
               | the hardest-to-type case to the most-frequently-used,
               | which in my opinion is an ergonomic failure.
        
             | SatvikBeri wrote:
             | I love Julia in general, but yeah, I hate `end`.
             | 
             | Re: 0-indexing vs 1-indexing. If you use 0-indexing, you
             | turn off a lot of non-engineering scientific programmers.
             | My personal experience is that 0-indexing is better for
             | engineering applications, while 1-indexing is better for
             | math. I'm a weirdo in that I don't seem to mind either
             | one, though.
        
               | skohan wrote:
               | Is there an example of where 1 is better for math, or is
               | it just a familiarity thing?
        
             | adsharma wrote:
             | If it's just the syntax and muscle memory you are quibbling
             | about, you can use python to write code and then transpile
             | it to Julia. Link to github in my other comment in this
             | article.
        
         | qaq wrote:
         | I think given how fast the actor concurrency stuff is
         | progressing, it should be a fairly good platform for web apps
         | in a few years.
        
         | wrinkl3 wrote:
         | > I think Serverless would be another ideal use-case for Swift,
         | where the productivity, clarity and correctness tools it offers
         | would be a huge benefit.
         | 
         | I agree, but sadly none of the big cloud providers has any
         | interest in pushing it - Google's got Go, AWS and Azure seem
         | focused on Typescript.
        
           | chris_st wrote:
           | I know AWS lets you just deploy a docker container, so you
           | can use whatever tools you want -- anyone know about Azure
           | and Google cloud?
        
             | skohan wrote:
             | Yes theoretically I think there is nothing stopping you
             | from shipping Swift code on AWS. I am pretty sure you could
             | do it in a lambda and you don't even need a docker
             | container.
             | 
             | If you're talking about a container workflow, I'm pretty
             | sure every cloud provider will support this just fine
             | currently.
        
         | liuliu wrote:
         | Yeah. I think being unable to merge back into the Swift
         | mainline really made adoption harder. I made some effort to
         | have PythonKit / swift-jupyter run more smoothly, and it feels
         | really good as a replacement for Python:
         | https://liuliu.me/eyes/data-
         | science-setup-with-swift-in-2021...
         | 
         | Luckily, I don't think Swift on other platforms is dead.
         | SwiftNIO seems to have worked quite well as a project. Bazel
         | support is solid. They still release Linux / Windows Swift
         | toolchains.
         | 
         | Also, shameless plug: I did implement my own deep learning
         | framework in Swift: https://libnnc.org/s4nnc/
        
         | mleonhard wrote:
         | > I guess it goes to show that the governance and availability
         | of a language have a lot more to do with adoption than the
         | merits of the language itself.
         | 
         | Nearly every Google project eventually gets abandoned because
         | the engineers that made it get promoted and move on to their
         | next promotable projects. The root cause is the ongoing failure
         | of Google's leaders to align employee incentives with the long-
         | term interests of the company or human society.
         | 
         | Even if S4TF had become popular, I expect it still would have
         | become neglected and rotted like most Google projects.
        
         | mrtksn wrote:
         | >I think Serverless would be another ideal use-case for Swift,
         | where the productivity, clarity and correctness tools it offers
         | would be a huge benefit.
         | 
         | Oh yes, I would love to have a Swift framework for Firebase on
         | the server, not only for iOS. It's an atrocity to write the
         | server logic in NodeJS after making the user app in Swift.
         | 
         | Every time I switch from Swift to JS I deeply appreciate the
         | beauty of Swift. In Swift I make far fewer silly mistakes,
         | whereas the NodeJS code feels like it's up and running by some
         | miracle and everything can collapse due to something I did but
         | couldn't catch before something horrible happens.
        
           | wlesieutre wrote:
           | Have you looked at Vapor?
        
             | mrtksn wrote:
             | Yep, Vapor looks nice but I want to use Firestore, so I
             | need a reliable integration of Firestore with Vapor and
             | there isn't one. I rely on listening for live data entries
             | and updates on the Firestore, processing those and putting
             | them back.
             | 
             | I'm doing it on NodeJS currently and I hate it. I used to
             | like JS but Swift showed me that there's more, there's
             | beauty in the world.
             | 
             | The only thing I miss on Swift is async/await and that's
             | coming.
        
           | adevx wrote:
           | Could it be that you have more knowledge of Swift and don't
           | care/want to invest in understanding Node.js/JavaScript? I
           | really enjoy writing backend code in TypeScript for Node.js.
           | But I loathe having to write Swift code for iOS. I know a big
           | part is my unwillingness to invest time in Apple's ecosystem
           | and properly learn Swift.
        
             | mrtksn wrote:
             | I used to work with web technologies but I got overwhelmed
             | with the complexity of the tooling.
             | 
             | For example, TypeScript is nice but it's not a "real"
             | language in the sense that you can write something in it
             | and expect it to work when it's fed into the compiler or
             | interpreter. To use it, you need to set up an environment
             | where all the moving parts are working in harmony and the
             | code you write in TypeScript is transpiled into the actual
             | code that NodeJS and the browser will understand, which is
             | JS. Debugging unusual bugs and doing something
             | non-conventional instantly becomes multiple times harder,
             | because you have layers and layers over the actual stuff
             | that is executed, which means you lose the browser's or
             | runtime's debugging capabilities, since those rarely tell
             | you anything useful about the pre-transpilation code.
             | 
             | Sometimes you have a library where someone a few years back
             | tried to do something similar to your idea but never had a
             | complete solution and stopped working on it, and when you
             | try to benefit from that work by building on top of it, you
             | find out that you need to modify your working environment
             | to support some special case of legacy code. You do that
             | and it all falls apart; now you must choose between trying
             | to fix it or giving up and restoring your setup. It's just
             | horrible.
             | 
             | The greatest thing about working with Swift on iOS is that
             | starting something new is as easy as creating a new Word
             | document, so you can start working on your idea right away
             | without worrying about the tooling and see where it goes.
             | In the JS world, I used to be exhausted and losing all my
             | motivation by the time I had my environment ready to go.
             | Templates and project starters tend to be no good because
             | they are all opinionated, assuming you must be creating a
             | list or a blog app. They all need extra work for a good
             | clean start, and that clean start is often too customised
             | to be used as a general use-case template.
             | 
             | There are so many solutions for the same problem in the web
             | tech world; each comes with its own shortcomings, and that's
             | how someone else creates the next framework. On iOS it's
             | easy: the answer is UIKit, or SwiftUI if you feel more
             | adventurous and bleeding edge.
        
           | simonbarker87 wrote:
           | It doesn't solve the issue of JS but NestJS feels like a more
           | robust way to make a server in JS - and the docs are great.
        
             | moocowtruck wrote:
             | nestjs is an atrocity that should not have been wrought
             | upon the js/ts community..
        
               | bartvk wrote:
               | Please check the guidelines via the link at the bottom.
               | What you're doing is called shallow dismissal.
        
               | simonbarker87 wrote:
               | Sorry you feel that way, it feels like angular for the
               | backend to me which is nice, it follows logical patterns,
               | loading modules allows us to configure micro services
               | from a single mono repo, decorators are nice, encourages
               | use of RxJS which is great and it still allows access to
               | the underlying express app if needs be.
               | 
               | Might not be to everyone's tastes but it feels like a
               | solid tool to me.
        
             | mrtksn wrote:
             | Thanks, but I don't really need a large framework for a
             | full-blown app, though.
        
               | jiofih wrote:
               | Swift and UIKit are already a much larger API than any
               | node framework in existence.
        
               | mrtksn wrote:
               | They have good defaults and Xcode is a good IDE with full
               | support of these frameworks, Swift is a good language
               | with sensible conventions so you don't really need to
               | learn that much.
               | 
               | When you don't know something, you can start typing
               | keywords, see what Xcode autocompletes, and read the
               | explanation of that function from the documentation
               | provided right there. It's not perfect (sometimes there's
               | no documentation), but it's not a big deal, as you can
               | look into the source files or try it out to see how it
               | behaves.
        
         | adsharma wrote:
         | The Python type system is improving. Another point on this
         | ease-of-use vs. type-safety spectrum is transpiling Python to a
         | type-safe language and compiling it statically.
         | 
         | I don't know which one is the best. So I've been experimenting
         | with several.
         | 
         | https://github.com/adsharma/py2many
        
           | dagmx wrote:
           | Is the Python type system improving or is it the type
           | annotation system that's improving? Big difference between
           | the two.
        
             | adsharma wrote:
             | The lack of algebraic data types is what Python gets
             | criticized for. That situation is improving, but I would
             | have preferred match to be an expression rather than a
             | statement.
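             | 
             | For reference, the statement form accepted for 3.10 looks
             | roughly like this (a sketch; the Circle/Rect dataclasses
             | are made up), and each arm has to assign a name instead of
             | the whole match evaluating to a value:
             | 
             |     from dataclasses import dataclass
             | 
             |     @dataclass
             |     class Circle:
             |         r: float
             | 
             |     @dataclass
             |     class Rect:
             |         w: float
             |         h: float
             | 
             |     def area(shape):
             |         match shape:  # a statement, not an expression
             |             case Circle(r=r):
             |                 result = 3.14159 * r * r
             |             case Rect(w=w, h=h):
             |                 result = w * h
             |             case _:
             |                 raise TypeError(shape)
             |         return result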
             | 
             | What else is missing in the python3 type system?
        
         | xiaodai wrote:
         | Example of features you miss?
        
       | akmittal wrote:
       | Swift for TensorFlow never made any sense. I remember the only
       | reason it started was that one main developer on the team liked
       | Swift.
       | 
       | TensorFlow for Node.js makes much more sense; the Node community
       | is a lot bigger and not sponsored by a big corp.
        
         | swagonomixxx wrote:
         | Well, there is already Tensorflow.js [0].
         | 
         | [0]: https://www.tensorflow.org/js.
        
       | nafizh wrote:
       | It seems the writing was on the wall once Chris Lattner left the
       | project. It's a shame since this could have been a real
       | breakthrough in machine learning combining performance and ease.
       | But the success of JAX means Google probably doesn't feel the
       | investment is worth it.
        
         | swagonomixxx wrote:
         | I mean, I'm fairly certain that this project is the reason
         | Chris Lattner joined Google; it was somehow in his contract
         | somewhere. He just found a way to shoehorn himself in there and
         | make sure that the language that he designed (Swift) is used
         | for something more than just iOS apps. Nothing against him at
         | all, I think most folks would want to push their brainchild as
         | far as they can, but that's just what it looks like from the
         | outside.
         | 
         | Since ML has been a large rage for the last few years, and
         | TensorFlow is one of the most popular frameworks that ML
         | engineers use (although I use and prefer PyTorch) it seemed
         | like the "perfect storm" for Swift to break out of the iOS
         | mold.
         | 
         | But it was clear to me (and many of my peers) that while this
         | was an interesting project, the need for it was really low, and
         | that nobody was going to throw years of experience with
         | libraries like TensorFlow and PyTorch (via Python) in the trash
         | just so we could use Swift, which for me isn't as remarkable a
         | language as all the hype makes it out to be, and I'm sure the
         | same is true for many ML devs and researchers alike.
        
       | Waterluvian wrote:
       | Stupid question. Why is it not "TensorFlow for Swift"?
        
       | msie wrote:
       | Swift compared to Python was getting way too complex. Why would
       | anyone learn Swift for data science when Python will do?
        
         | geophile wrote:
         | I am not involved in data science at all, I have extensive
         | Python experience, and did some work in Swift for a couple of
         | years.
         | 
         | Putting aside the Python 2/3 transition, I think that Python
         | does a good job of introducing new features carefully. For
         | example, the "X if P else Y" syntax. While I prefer the C
         | approach, the Python approach works fine and fits in the
         | language well.
         | 
         | I had a mild dislike of Swift from the beginning, but it was a
         | lot more palatable than working in Objective C. What bothered
         | me was all the gimmicky syntax. Sometimes "let" unwraps an
         | optional, sometimes it doesn't. ! and ? seem badly overused.
         | ARC doesn't compose well with other language features. There is
         | a ton of syntax whose only purpose appears to be to stuff logic
         | into and around assignment syntax. Then it got worse, with
         | major language changes across versions 3-5, each one
         | introducing lots of new concepts and syntax.
         | 
         | And of course, Swift is a proprietary Apple language (in the
         | same way that C# is tied to Microsoft). When I stopped doing
         | iOS programming, I very happily put Swift behind me.
         | 
         | I saw the TensorFlow additions to the language and thought the
         | whole thing bizarre. I put my reaction down to my lack of
         | involvement with data science. Maybe I'd appreciate those
         | changes more if I knew more about data science. But I also
         | remember thinking that it was quite odd to make such extensive
         | additions to the language to support this one application area,
         | which was very, very far from the language's sweet spot.
        
         | smabie wrote:
         | I mean, Python doesn't do it very well at all.
         | 
         | That's not to say I would use Swift, but I'd choose Julia any
         | day over Python.
        
         | jtdev wrote:
         | Agreed, Swift has an obsession with functional programming that
         | is just so off-putting to me and I suspect many others.
        
           | astrange wrote:
           | The type system is kind of sort of vaguely ML/Haskell-like
           | despite looking like C. Is that what you mean?
           | 
           | It certainly isn't a functional programming language. It's
           | intentionally imperative.
        
             | jtdev wrote:
             | Yes, Swift is fundamentally imperative... but it has
             | functional concepts littered throughout in ways that are
             | obtuse, unhelpful, and unavoidable.
        
           | jshier wrote:
           | This is hilariously wrong. Actual fans of functional
           | programming would disagree with you, as there are several
           | things Swift doesn't do to enable pure functional
           | programming.
        
       | jtdev wrote:
       | Not surprised that this project had trouble gaining traction
       | given Swift's obsession with shoe-horning functional concepts
       | into every crevice of the language, e.g., I can't get through a
       | single Swift tutorial without having to get into closures.
        
         | rudedogg wrote:
         | For a short time, maybe around Swift 2/3, functional features
         | got some focus, but the language moved in a different
         | direction.
         | 
         | The only functional features I can think of are closures,
         | map/reduce/etc., and value types.
         | 
         | I have a lot of criticisms about Swift but this one seems
         | weird/outdated to me.
        
         | jshier wrote:
         | Closures aren't really a functional concept, they can just be
         | used functionally. You'll see a big reduction in closure usage
         | when the new concurrency features ship.
        
       | pixel_tracing wrote:
       | Wow this project had so much potential going for it. What does
       | Chris Lattner have to say about this?
        
         | throwaway1777 wrote:
         | He left Google a while ago, so I doubt he's surprised.
        
         | the_damn_nava wrote:
         | Same questions here, Jeez.
         | 
         | Would love to know Jeremy Howard's too.
        
           | BadInformatics wrote:
           | I remember an interview [1] with him a few months back where
           | he seemed to have stopped pursuing it after Chris Lattner's
           | departure. Sylvain Gugger, who was pretty involved in fastai
           | and prototyping a Swift port, also left to join Huggingface a
           | while ago.
           | 
           | [1] https://www.youtube.com/watch?v=t2V2kf2gNnI
        
           | fuddle wrote:
           | I immediately thought of Jeremy Howard, he has mentioned this
           | project in a couple of interviews.
        
             | Tenoke wrote:
             | He didn't just mention it, he taught the courses of one
             | iteration of fast.ai in Swift, as it was allegedly the way
             | forward and more useful for students (paraphrasing).
        
       | FartyMcFarter wrote:
       | Looks like Chris Lattner's "exec commitment and strong roadmap"
       | comment was not that meaningful:
       | 
       | https://twitter.com/clattner_llvm/status/1222032740897284097...
        
       | tokipin wrote:
       | I think people are deceived by Swift's syntax. It looks simple
       | thanks to syntactic sugar, but it's not actually any more
       | approachable than other systems languages like C# or Java. Given
       | that a lot of work
       | in ML is exploratory, it doesn't seem like a good fit compared to
       | a hackable scripting language like Python. I would bet against
       | SwiftUI for similar reasons.
        
       | iamaziz wrote:
       | Long live Python!
        
       | BadInformatics wrote:
       | Looks like @throw6606 was right:
       | https://news.ycombinator.com/item?id=24533937. Does anyone know
       | the status of https://ai.facebook.com/blog/paving-the-way-for-
       | software-20-...?
        
         | willxinc wrote:
         | Looks like your link was cut off:
         | https://ai.facebook.com/blog/paving-the-way-for-software-20-...
         | 
         | (Missing the -with-kotlin at the end)
        
         | xenihn wrote:
         | Reading that thread, I'm always so confused by HN posters who
         | ask for evidence that would be impossible to provide without
         | self-incriminating. Are they asking disingenuously, or are they
         | just clueless?
        
       | xiaodai wrote:
       | Knew this was gonna happen. Doomed from the beginning. Should've
       | chosen Julia.
        
       | bsaul wrote:
       | Swift now being a 100% Apple-sponsored & owned project again
       | makes me a bit nervous.
       | 
       | Does anyone know if Chris Lattner is at least using Swift at his
       | new company? I have the feeling Swift never really worked on the
       | server side, data science is now officially a failure, and all
       | that is left is a very niche market of 100% native mobile
       | development.
       | 
       | I love this language, but I'm eager to see it handled by a proper
       | foundation with big names behind it, as it's a really great
       | language.
        
         | MrPowers wrote:
         | Swift looked like a promising language for data science a few
         | years ago. I'm not familiar with Swift. Can anyone provide
         | additional context as to why the language seemed promising for
         | data?
        
           | SatvikBeri wrote:
           | I disagree with the pitch, but the pitch I heard when this
           | was announced was:
           | 
           | * Swift lets you write code that's as concise as Python while
           | still being fast, and therefore allows you to get away from
           | the problem of having to write ML code in multiple languages
           | (TensorFlow is actually mostly C++, which makes it difficult
           | for a Python developer to debug.)
           | 
           | * That the lack of static typing is a major pain point in
           | Python, and Swift's type system would help write code with
           | fewer errors.
        
             | ljm wrote:
             | > Swift lets you write code that's as concise as Python
             | while still being fast,
             | 
             | > (TensorFlow is actually mostly C++, which makes it
             | difficult for a Python developer to debug.)
             | 
             | To be honest, if I had to choose between Python and Swift
             | I'd still choose Python. Swift is a nice evolution from
             | Obj-C and all but it is nowhere near as simple as Python,
             | or Ruby, or PHP, or Javascript, or Kotlin. And Apple's
             | documentation of these things is pretty woeful at the best
             | of times lately.
             | 
             | I also disagree with the pitch, is what I'm saying.
        
               | aplummer wrote:
               | To say that Swift is "nowhere near as simple" as
               | Javascript or Kotlin is a very strange opinion to me, and
               | I have experience with all 3.
               | 
               | Simple as in syntax, memory management, or in what
               | regard? I can't think of a single example where Swift is
               | not equivalent or better. Do you have the same criticism
               | of Scala?
        
               | ljm wrote:
               | I do. Scala is insanely complicated once you inherit a
               | codebase full of implicits and custom operators.
        
               | aplummer wrote:
               | If misuse of a language feature is a criticism of the
               | language, I have bad news about javascript.
        
               | skohan wrote:
               | I think considering swift as merely an evolution of obj-c
               | is a drastic underestimation. Swift is a very nice
               | language in its own right. Having spent time with most of
               | the languages you've mentioned, while the learning curve
               | might be a bit steeper, I think Swift is much easier to
               | manage a medium or large project with than any of those
               | options with the exception of maybe Kotlin, and
               | personally I find some of the design choices a bit better
               | in Swift than in Kotlin.
        
             | [deleted]
        
         | Grustaf wrote:
         | > Swift now being a 100% Apple-sponsored & owned project again
         | makes me a bit nervous.
         | 
         | Are you nervous about a language that is designed and supported
         | by the largest and most successful company in the world?
         | 
         | > very niche market of 100% native mobile development
         | 
         | That "niche" is at least 1.5 billion devices. Devices bought by
         | the 1.5 billion richest people in the world because they like
         | them, not because their employers forced them to. Seems like a
         | pretty ideal "niche".
         | 
         | But I wouldn't agree that "swift never worked server side".
         | I've built several backends in Vapor and it's amazing.
        
           | forgot-my-pw wrote:
           | The Uber engineering disaster story from 2 months ago seems
           | to suggest that Apple isn't dogfooding Swift much internally:
           | https://news.ycombinator.com/item?id=25373462
        
             | azinman2 wrote:
             | iOS doesn't come as a 100MB cellular download, so that isn't
             | really the same kind of issue. There are many reports
             | showing how it's slowly coming into more widespread use at
             | Apple.
             | 
             | For me the lesson is that just because a new
             | technology/language is out doesn't mean you should jump on
             | it. Things need time to mature, and if you're Uber, you
             | can't risk having half of your income rely on something
             | new. Compilers, linkers, and programming languages (and
             | databases and OS kernels) take years to mature. Hell, just
             | earlier this week was a post about diagnosing a 21 year old
             | bug in the Linux networking stack for rsync of all things.
             | 
             | I'm quite shocked that enough experienced people felt that
             | level of confidence. In earlier versions of Swift I
             | personally experienced slow compiles, and that was on a not
             | terribly large code base -- nowhere near Uber's. That alone
             | should have been a big clue about the state of things.
        
             | Grustaf wrote:
             | Apple is using Swift everywhere. Including SwiftUI.
        
           | iujjkfjdkkdkf wrote:
           | > Are you nervous about a language that is designed and
           | supported by the largest and most successful company in the
           | world?
           | 
           | Yes, I am. The concentration of power between facebook and
           | google for ML frameworks is worrying enough, but to their
           | credit, they have so far been very open and collaborative and
           | are giving a lot back to the ML community in terms of
           | research
           | 
           | Apple on the other hand is the dictionary definition of
           | walled garden. Their controlling and insular nature has been
           | the reason for their success, and power to them, but for
           | something as fast moving and community driven as machine
           | learning, I would be very wary of relying on them for any
           | part of it.
        
             | samatman wrote:
             | This is so bizarre to me.
             | 
             | What possible future are you trying to ward off here? Swift
             | is an open-source project, it's self-evidently in Apple's
             | interest for as many people as possible to use it.
             | 
             | Be concrete, please. What scares you about using an open-
             | source language which is mostly developed by Apple? Do you
             | have the same paranoia about clang? If not, why not, what's
             | the difference?
        
             | jamil7 wrote:
             | I think there is too much focus on Swift as a general
             | purpose language. I really enjoy working in it, and it's
             | great at what it does; does it have to be deployed in every
             | domain?
        
         | jdmcd wrote:
         | > I have the feeling swift never really worked in the server
         | side
         | 
         | Vapor [1] is wonderful to work with and has a large user base.
         | Apple is also putting significant resources into Server Side
         | Swift with Swift NIO [2]. Lots of really cool stuff happening
         | in that ecosystem
         | 
         | [1]: https://vapor.codes [2]: https://github.com/apple/swift-
         | nio
        
         | jimbokun wrote:
         | > is now a very niche market of 100% native mobile development.
         | 
         | Which just happens to be one of the biggest development niches
         | in the world.
        
           | bsaul wrote:
           | Not really... games are made with 3D tools (Unity, etc.),
           | form-based "data oriented" apps are made with cross-platform
           | tools (React Native, Xamarin, Flutter, etc.).
           | 
           | The only things left are a very small percentage, and add to
           | that the fact that you either 1/ have the budget for two
           | development teams, or 2/ can afford to skip the other "half"
           | of the market by staying iOS-only.
        
         | Hamuko wrote:
         | > _a very niche market of 100% native mobile development_
         | 
         | I mean, it also works for native desktop development. And is
         | there really an issue with that? Objective-C basically had no
         | reason to exist beyond iOS/Mac programming and at least now we
         | don't have god damn [myString stringByAppendingString:@"another
         | string"].
        
           | ChrisMarshallNY wrote:
           | I suspect that we'll be seeing ObjC for a long time, as the
           | lower-level system language for Apple devices. I know that
           | Apple still uses it for much of their system programming. No
           | idea if that's by choice, or legacy (possibly both).
           | 
           | I was looking at the upcoming async/await stuff for Swift,
           | and it's still not comparable to some of the lower-level
           | threading systems.
           | 
           | That said, I have not done system programming for a long time
           | (unless you count drivers). I think Swift is an almost ideal
           | application-level language.
           | 
           | I'm not especially bothered by it being Apple-only. It works
           | on all Apple-supported hardware, has a new application
           | toolkit coming into play (SwiftUI), and has a _very_ active
           | development community. Spend some time on the Swift
           | discussion forum to see some pretty heady discussions.
        
             | raydev wrote:
             | We'll see it in legacy code for a long time, sure, but
             | there's every indication that all new Apple frameworks are
             | being written in Swift. Anyone starting a new project in
             | Objective-C is in the minority.
        
               | lapcatsoftware wrote:
               | > there's every indication that all new Apple frameworks
               | are being written in Swift
               | 
               | There's no such indication. "The number of binaries using
               | Objective-C is still growing with each iOS release."
               | https://blog.timac.org/2020/1019-evolution-of-the-
               | programmin...
        
               | mplewis wrote:
               | The number of binaries of _all languages_ in iOS (except
               | C) increases year over year, so that doesn't really mean
               | anything. It's more telling that the number of Swift
               | binaries _in proportion to_ the number of Objective-C
               | binaries is increasing year over year.
        
               | lapcatsoftware wrote:
               | > The number of binaries of _all languages_ in iOS
               | increases year over year
               | 
               | This is not true either. "the number of binaries written
               | entirely in C is now stagnating"
               | 
               | > the number of Swift binaries in proportion to the
               | number of Objective-C binaries is increasing year over
               | year
               | 
               | This is true. But it's a much weaker claim than the one I
               | was replying to.
               | 
               | In other words, the reports of Objective-C's death have
               | been greatly exaggerated. ;-)
        
               | copascetic wrote:
               | If you look closely at how that data was collected, what
               | it's actually measuring is whether a binary links against
               | the ObjC runtime library, which will be the case if a
               | binary transitively depends on any ObjC framework, so
               | even if all _new_ frameworks and executables were written
               | in Swift, we would still expect the number presented in
               | that post to continue to grow until most or all of the
               | important frameworks using ObjC are rewritten entirely
               | in Swift. I don't think this data is sufficient to say
               | one way or the other to what degree that is occurring.
        
               | lapcatsoftware wrote:
               | Fair criticism. I'm not sure why the author didn't try to
               | detect the use of objc_msg functions, for example. So the
               | ObjC binaries may be overcounted a bit.
               | 
               | Still, the test for Swift binaries seems accurate, and if
               | you look at iOS 13.1 vs 14.0, for example, according to
               | the chart there was an increase of 157 Swift binaries and
               | 446 Objective-C binaries. If we assume there are 157
               | "false positives" in the ObjC binaries, that's still an
               | increase of 289 ObjC binaries that don't use Swift at
               | all.
        
               | rjmccall wrote:
               | Swift frameworks that use ObjC system libraries will
               | still use objc_msgSend.
        
               | lapcatsoftware wrote:
               | D'oh, right. A good test might be difficult. In the
               | absence of that, I guess the safe bet is just to count
               | every Swift binary as a false positive for ObjC, though
               | that's not quite fair, since you can mix the two in the
               | same code base.
        
             | stephencanon wrote:
             | I'm a little bit confused by this comment. Obj-C is, in a
             | lot of ways, the _higher_ level language on Apple systems
             | now, and most of what I would consider "systems"
             | programming on macOS and iOS is in C, assembly, and C++.
        
               | ChrisMarshallNY wrote:
               | Probably right, but ObjC inherits C (and ObjC++ inherits
               | C++), so you get all that low-level goodness, as well as
               | the SmallTalk stuff.
        
               | stephencanon wrote:
               | You can more or less write C in Swift as well, though.
               | Some pointer operations get more verbose, but the
               | standard library also makes some stuff easier. The only
               | thing that's really missing is a nice fixed-size array
               | syntax (they get imported from C as tuples, which is not
               | ideal).
        
             | dwaite wrote:
             | I suspect we will hit an inflection point where Objc is
             | bridged to swift code rather than the other way around - no
             | reason to let Swift be the one taking the interoperability
             | hit.
             | 
             | There are already controls in Big Sur and iOS 14 (Color
             | Picker) which are SwiftUI and bridged to UIKit/AppKit.
        
         | echelon wrote:
         | Swift never had a good outside-Apple story. Apple isn't the
         | sort of company that jumps at opening up its platform or
         | tooling for non-Apple uses.
         | 
         | There's nothing wrong with this. Swift can be the best language
         | for the Apple platform and find lots of success in that.
        
           | saagarjha wrote:
           | LLVM? WebKit?
        
             | coolspot wrote:
             | WebKit started as KDE's KHTML/KJS and LLVM started as a
             | research project at the University of Illinois.
             | 
             | Apple took the code and extended it; they had little choice
             | about the source code license.
             | 
             | https://en.wikipedia.org/wiki/Webkit
             | 
             | https://en.wikipedia.org/wiki/LLVM
        
             | tsegratis wrote:
             | Neither originated with Apple, though I agree Apple did a
             | lot to push them and make them what they are
        
               | favorited wrote:
               | OK, Clang then.
        
               | coolspot wrote:
               | It started at the University of Illinois, and is still
               | under their source code license.
        
               | favorited wrote:
               | No, that's where LLVM started. Clang was started from
               | scratch at Apple to replace the GCC-LLVM frontend.
               | 
               | https://llvm.org/devmtg/2007-05/09-Naroff-CFE.pdf
        
             | echelon wrote:
             | WebKit got forked into Blink, which is no longer under
             | Apple leadership. Brave, Chromium, and the other major
             | KHTML-derived browsers adopted this.
             | 
             | I can think of dozens of examples from Microsoft and
             | Google, which are more "outside ecosystem" inclusive as a
             | means of gaining mindshare. TypeScript, Golang, Visual
             | Studio Code, protobuf, gRPC, ...
             | 
             | Other companies think of the things living outside their
             | moat more flexibly, or their moats are less rigid. That's
             | not to say that Apple's approach is wrong (look at their
             | market cap!), but it has different consequences in terms of
             | the open source code mindshare they cultivate.
        
               | saagarjha wrote:
               | In what sense? 90% of the commits come from Apple and
               | they drive most of the technical direction...
        
         | johnnycerberus wrote:
         | Chris Lattner's new company is mostly using Scala and C as far
         | as I know. [1]
         | 
         | [1] https://github.com/sifive
        
         | Zababa wrote:
         | > I have the feeling swift never really worked in the server
         | side
         | 
         | IBM dropped Kitura more than a year ago [1]. There's Vapor [2]
         | which seems popular, but I don't know how much.
         | 
         | [1]: https://forums.swift.org/t/december-12th-2019/31735
         | 
         | [2]: https://github.com/vapor/vapor
        
           | [deleted]
        
         | grey-area wrote:
         | Don't worry, in a few years Apple will rewrite all their SDKs
         | again in a different language.
         | 
         | The churn may not be a deliberate strategy but it is certainly
         | very effective in locking developers in.
        
           | jaegerpicker wrote:
           | Right... Because Apple has done that so many times... oh
           | wait, they have done it exactly once since re-launching after
           | acquiring NeXT. It was Obj-C and now it's Swift, except they
           | still actively support Obj-C, so I'm not sure what you are
           | talking about.
        
           | geoffpado wrote:
           | Apple churns languages every few years? They kept Objective-C
           | around for _13 years_ before introducing Swift, and that was
           | a language that was widely disliked by people from outside
           | the Apple ecosystem. You can still use it for the vast
           | majority of OS features today[0], nearly 20 years after it
           | was first used on Apple platforms. Heck, Obj-C is used for
           | Mac/iOS development because that was the preferred language
           | for Mac OS X's immediate ancestor, NeXTSTEP, which means that
           | it's been a part of the same development ecosystem for about
           | 30 years. If this is churn, I think I'm okay with it.
           | 
           | [0] As far as I know, the only system feature that requires
           | Swift is iOS 14/macOS 11's new widgets, which must be written
           | in SwiftUI.
        
           | [deleted]
        
         | mottiden wrote:
         | What do you mean by data science being a failure?
        
           | nt591 wrote:
           | Presumably "Swift for data science" is a failure, not the
           | idea of data science as a whole
        
       | jhatemyjob wrote:
       | Not surprising. Swift isn't a very good language. Objective-C is
       | better in a lot of ways.
        
       | adamnemecek wrote:
       | This was a matter of time.
        
         | sedatk wrote:
         | Why?
        
           | adamnemecek wrote:
           | No one was using it.
        
       | sedatk wrote:
       | I don't see any justification for the shut down. Does anyone know
       | why?
        
         | marcinzm wrote:
         | It was Chris Lattner's baby and when he left Google no one
         | cared to push it forward. The Google vs. Apple fights over ads
         | and privacy probably didn't help the case.
        
         | therealmarv wrote:
         | maybe Apple's own replacements with AI components?
        
       ___________________________________________________________________
       (page generated 2021-02-12 23:00 UTC)