[HN Gopher] Migrating from Supabase
       ___________________________________________________________________
        
       Migrating from Supabase
        
       Author : stevekrouse
       Score  : 210 points
       Date   : 2023-05-19 17:57 UTC (5 hours ago)
        
 (HTM) web link (blog.val.town)
 (TXT) w3m dump (blog.val.town)
        
       | joshghent wrote:
        | I echo everything the author said here, and kudos for being
        | transparent.
       | 
       | I've faced exactly the same problems building my new product.
        | But, on the other hand, Supabase was incredibly easy to set up,
        | and meant I could worry about infrastructure later.
       | 
       | Pros and cons like with everything, and always wise to understand
       | the flaws of the tech you're using.
        
       | neximo64 wrote:
        | Can someone explain a bit better what the issues are? What
        | exactly are the issues with migration if you use an SQL script to
        | do the migration instead of the Supabase interface?
        
         | kiwicopple wrote:
         | That's definitely our recommendation beyond prototyping. I
         | shared more thoughts here:
         | https://news.ycombinator.com/item?id=36006754
         | 
         | For developers who have worked with databases before, SQL
         | migrations might be obvious. But for many of our audience it's
         | not. We'll adapt the interface to make this pattern more front-
         | and-center. We also need to improve our CLI to catch up with
        | other migration tools, because a lot of our audience haven't
        | used established tools before (Flyway, Sqitch, Alembic, etc.).
        
       | doctorpangloss wrote:
       | > The CLI manages the Supabase stack locally: Postgres, gotrue, a
       | realtime server, the storage API, an API gateway, an image
       | resizing proxy, a restful API for managing Postgres, the Studio
       | web interface, an edge runtime, a logging system, and more - a
       | total of 11 Docker containers connected together.
       | 
       | Can Supabase author a set of Kubernetes manifests similar to what
       | they run in production, and perhaps distribute those?
        
         | ahachete wrote:
         | This is not from Supabase, but as a community contribution. See
         | upthread [1]: "at StackGres we have built a Runbook [2] and
         | companion blog post [3] to help you run Supabase on
         | Kubernetes."
         | 
         | [1]: https://news.ycombinator.com/item?id=36006308
         | 
         | [2]: https://stackgres.io/doc/latest/runbooks/supabase-
         | stackgres/
         | 
         | [3]: https://stackgres.io/blog/running-supabase-on-top-of-
         | stackgr...
        
       | Ethan_Mick wrote:
        | How do people on HN like Row Level Security? Is it a better way
        | to handle multi-tenancy in a cloud SaaS app vs `WHERE` clauses in
        | SQL? Worse? Nicer in theory but less maintainable in practice?
       | 
        | fwiw, Prisma has a guide on how to do RLS with its client. While
        | the original issue[0] remains open, they have example code[1] for
        | the client using client extensions[2]. I was going to try it out
        | and see how it felt.
       | 
       | [0]: https://github.com/prisma/prisma/issues/12735
       | 
       | [1]: https://github.com/prisma/prisma-client-
       | extensions/blob/main...
       | 
       | [2]: https://www.prisma.io/docs/concepts/components/prisma-
       | client...
        
         | doctor_eval wrote:
         | I use both for defence in depth. The SQL always includes the
         | tenant ID, but I add RLS to ensure mistakes are not made. It
         | can happen both ways: forget to include the tenant in the SQL,
         | or disable RLS for the role used in some edge case. For
         | multitenancy, I think it's absolutely critical to have cross-
         | tenancy tests with RLS disabled.
         | 
          | One of the things I think is important is to make the RLS query
          | super efficient - make the policy function STABLE, avoid
          | database lookups, get the context from settings, etc.
         | 
         | RLS is pretty great as a backstop, but I found Supabase over-
         | reliant on RLS for security, when other RBACs are available in
         | regular PG. I can't remember the details now.
         | 
         | I've found RLS is great with Postgraphile which uses a similar
         | system to Supabase but is a bit more flexible.
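The policy-function advice above (STABLE, no table lookups, context pulled from settings) could be sketched roughly like this in plain Postgres. Table, setting, and policy names here are hypothetical, not from the original comment:

```sql
-- Read the tenant id from a session setting instead of querying a table.
-- STABLE tells the planner the function returns the same value within a
-- statement, so simple SQL bodies like this one can be inlined.
CREATE FUNCTION current_tenant_id() RETURNS uuid
  LANGUAGE sql STABLE
  AS $$ SELECT current_setting('app.tenant_id', true)::uuid $$;

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- The policy itself performs no database lookups; it only compares a
-- column against the value plucked from the setting.
CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_tenant_id());
```

The application (or connection pooler hook) would run `SELECT set_config('app.tenant_id', '<uuid>', true)` at the start of each transaction to supply the context.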
        
         | atonse wrote:
          | It is highly appealing to have that defense in depth. However,
          | when building a prototype or a product, not having experience
          | with it makes me worry that we will end up stuck with a choice
          | that is very hard to pull ourselves out of.
         | 
         | So instead we've stuck to having that filtering logic in the
         | application side. The main concern is how user auth/etc works
         | in Postgres. (lack of knowledge, not lack of trust).
         | 
          | Because we also have complex filtering like, "let me see all
          | the people in my team if I have this role, but if I'm a public
          | user, only show this person", etc.
        
         | crooked-v wrote:
         | The main issue we've had with it is that it's just plain slow
         | for a lot of use cases, because Postgres will check the
         | security for all rows _before_ filtering on the joins, doing
         | anything with WHERE clauses, doing anything to even tentatively
         | take LIMIT into account, etc.
         | 
          | Imagine a 1-million-row table and a query with `WHERE x=y` that
          | should result in about 100 rows. Postgres will do RLS checks on
          | the full 1 million rows before the WHERE clause is involved at
          | all.
        
           | steve-chavez wrote:
           | > because Postgres will check the security for all rows
           | before filtering on the joins, doing anything with WHERE
           | clauses, doing anything to even tentatively take LIMIT into
           | account, etc.
           | 
           | Note that the above only happens for non-inlinable[1]
           | functions used inside RLS policies.
           | 
            | Going from what you mentioned below, it seems your main
            | problem is SECURITY DEFINER functions, which aren't
            | inlinable.
           | 
           | It's possible to avoid using SECURITY DEFINER, but that's
           | highly application-specific.
           | 
           | [1]:https://wiki.postgresql.org/wiki/Inlining_of_SQL_function
           | s#I...
        
           | kiwicopple wrote:
           | With PostgREST you can use the pre-fetch method to solve
           | this: https://postgrest.org/en/stable/references/transactions
           | .html...
           | 
            | You can use that to inject your ACL/permissions into a
            | setting - set_config('request.permissions',
            | '{"allowed":true}', true). Then in your RLS rules you can
            | pluck them out - current_setting('request.permissions')::jsonb.
           | 
           | This should make your RLS faster than most other options, in
           | theory, because of data co-location
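A rough sketch of this pre-fetch pattern follows. The setting key and table names are assumptions (custom settings in Postgres need a two-part name, hence `request.permissions`), and the real hook wiring is PostgREST-specific:

```sql
-- In a pre-request function, inject the caller's permissions into a
-- transaction-local setting (the third argument true = local to the
-- current transaction):
SELECT set_config('request.permissions', '{"allowed": true}', true);

-- An RLS policy can then read the claim back without touching any table:
CREATE POLICY allow_by_claim ON items
  USING (
    (current_setting('request.permissions', true)::jsonb ->> 'allowed')::boolean
  );
```

Because the policy only parses a setting already in memory, each per-row check avoids any extra I/O, which is the data co-location point being made.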
        
             | crooked-v wrote:
             | That seems deeply impractical for a lot of cases. If user A
             | has access to 80,000 of those 1,000,000 rows in a way
             | that's determined from another table rather than as part of
             | in-row metadata, doing the lookups to JSONify 80,000 UUIDs
             | as an array to pass along like that really isn't going to
             | help beyond cutting down a 20-second query response to a
             | still-unacceptable 7-second query response [1] just to get
             | 100 rows back.
             | 
             | [1]: Both numbers from our own testing, where the 7 seconds
             | is the best we've been able to make it by using a SECURITY
             | DEFINER function in a `this_thing_id IN (SELECT
             | allowed_thing_ids())` style, which should have basically
             | the same result in performance terms as separately doing
             | the lookup with pre-fetching, because it's still checking
             | the IN clause for 1,000,000 rows before doing anything
             | else.
        
               | kiwicopple wrote:
               | You certainly wouldn't want to inject 80K UUIDs. I'm not
               | sure I understand the structure you're using but if you
               | want to send me some details (email is in my profile) I'd
               | like to dig into it
               | 
               | As an aside, this is a good read on the topic:
               | https://cazzer.medium.com/designing-the-most-performant-
               | row-...
        
               | crooked-v wrote:
               | At its core it's a pretty simple multi-tenancy
               | arrangement. Think something like this:
                | 
                |     tenants (id, updated_at)
                |     tenants_users (id, updated_at, tenant_id, user_id)
                |     products (id, updated_at, name, tenant_id)
                |     product_variants (id, updated_at, product_id, name)
               | 
               | One of the tenants views a page that does a simple
               | `SELECT * FROM products ORDER BY updated_at LIMIT 100`.
                | The RLS checks have to reference `products` -> `tenants`
                | -> `tenants_users`, but because of how Postgres does it,
               | every row in products will be checked no matter what you
               | do. (Putting a WHERE clause on the initial query to limit
               | based on tenant or user is pointless, because it'll do
               | the RLS checks before the WHERE clause is applied.) Joins
               | in RLS policies are awful for performance, so your best
               | bet is an IN clause with the cached subquery function, in
               | which case it's still then got the overhead of getting
               | the big blob of IDs and then checking it against every
               | row in `products`.
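For concreteness, the `IN (SELECT ...)` pattern described above might look like this against the example schema. The settings key and function body are assumptions, not the commenter's actual code:

```sql
-- SECURITY DEFINER so the membership lookup is not itself subject to
-- RLS on tenants_users; STABLE so the planner can materialize the
-- subquery's result once per statement rather than per row check.
CREATE FUNCTION allowed_tenant_ids() RETURNS SETOF uuid
  LANGUAGE sql SECURITY DEFINER STABLE
  AS $$
    SELECT tenant_id FROM tenants_users
    WHERE user_id = current_setting('app.user_id', true)::uuid
  $$;

CREATE POLICY tenant_member ON products
  USING (tenant_id IN (SELECT allowed_tenant_ids()));
```

As the comment notes, this still leaves the `IN` test applied to every candidate row in `products`, which is where the remaining cost comes from.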
        
               | jgraettinger1 wrote:
               | Do you have an index on `updated_at` ?
        
           | jgraettinger1 wrote:
           | I'm having a hard time relating to this comment given our own
           | experience.
           | 
           | We use RLS extensively with PostgREST implementing much of
           | our API. It _absolutely_ uses WHERE clauses and those are
           | evaluated / indexes consulted _before_ RLS is applied.
           | Anything else would be madness.
        
         | esafak wrote:
         | I use a database that supports unlimited databases, tables, and
         | views. Makes it easy to separate tenants.
        
         | notyograndma wrote:
          | Hi - we're an analytics solution for a specific vertical, so
          | this is probably not appropriate for everyone, but what we did
          | was create partitioned data tables whose names are derived
          | from a hash of the user UUID and other context, generated when
          | we provision data tables for the user. The parent table is
          | never accessed directly. We're using Supabase, but we don't
          | use Supabase's libraries to operate this.
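A minimal sketch of the per-user partition idea. All names are hypothetical, and a real scheme would fold the extra context the commenter mentions into the partition name:

```sql
-- Parent table is list-partitioned on the owning user and never
-- queried directly.
CREATE TABLE metrics (
  user_id    uuid NOT NULL,
  payload    jsonb,
  created_at timestamptz DEFAULT now()
) PARTITION BY LIST (user_id);

-- At provisioning time, create a partition whose name embeds a hash of
-- the user UUID (the suffix here is a made-up example):
CREATE TABLE metrics_u_3f9c2a PARTITION OF metrics
  FOR VALUES IN ('8d6f1c2e-1111-4222-8333-000000000aab');
```

Tenant separation then falls out of partition routing rather than RLS checks, at the cost of DDL at provisioning time.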
        
       | throwawaymaths wrote:
       | Amazed they used scp instead of rsync
        
       | ilrwbwrkhv wrote:
       | I also had the same experience with Supabase.
       | 
        | Even though it looks like a great product initially, it has a lot
        | of errors and bugs when you are trying to actually build
        | something more robust than a toy app.
       | 
       | Local development is a massive pain with random bugs.
       | 
       | The response time of the database also varies all over the place.
       | 
        | But the most important problem we faced was having so much
        | application logic in the database.
       | 
        | Row level security is their "foundational piece", but there is a
        | reason why we moved away from database functions and application
        | logic in the database over a decade ago: that stuff is
        | unmaintainable.
       | 
        | Support is also really poor, and at the end of the day the whole
        | platform felt like a hack.
       | 
        | I think now, for most apps with up to 500,000 users (with 10,000
        | concurrent realtime connections), PocketBase is the best PaaS out
        | there, having tested a bunch of them.
       | 
       | A single deployable binary which PocketBase provides is a breath
       | of fresh air.
       | 
       | Anything more than that, just directly being on top of bare metal
       | or AWS / GCP is much better.
        
         | another_story wrote:
          | Could you be more specific about what's difficult in local
          | development? I've used it locally and had little difficulty.
        
         | jgraettinger1 wrote:
         | > Row level security is their "foundational piece", but there
         | is a reason why we moved away from database functions and
         | application logic in database over a decade ago: that stuff in
         | unmaintainable.
         | 
          | Funny. In my experience, application-level authorization checks
          | are very error-prone, easy to accidentally omit, and difficult
          | to audit for correctness. "Unmaintainable", I suppose.
         | 
         | Whereas RLS gives you an understandable authorization policy
         | with a baseline assurance that you're not accidentally leaking
         | records you shouldn't be.
        
         | dingledork69 wrote:
         | I don't use supabase, but am a big postgres fan:
         | 
         | > that stuff in unmaintainable
         | 
         | Wrong. Version your functions and use something like liquibase
         | to apply migrations.
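In other words, treat each function change as an ordinary migration file. A hypothetical example (Liquibase also accepts plain SQL changelogs, not just XML; the function and its body are invented for illustration):

```sql
--liquibase formatted sql
--changeset app:0042 runOnChange:true splitStatements:false

CREATE OR REPLACE FUNCTION order_total(p_order_id bigint) RETURNS numeric
  LANGUAGE sql STABLE
  AS $$
    SELECT coalesce(sum(quantity * unit_price), 0)
    FROM order_items
    WHERE order_id = p_order_id
  $$;
```

`runOnChange:true` re-applies the changeset whenever the file's checksum changes, which fits idempotent `CREATE OR REPLACE FUNCTION` definitions well.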
        
           | ilrwbwrkhv wrote:
           | And handwrite xml? No thanks. Again, if I wanted to do any of
           | this management myself, I wouldn't be using a PaaS.
        
           | koromak wrote:
           | Way less friendly than writing code IMO
        
         | xrd wrote:
         | I agree. I would love to see more articles on pocketbase. It's
         | phenomenal and ganigeorgiev is an animal about responding to
         | bugs and discussions. He's got to be a hybrid human and ChatGPT
         | robot.
        
           | ilrwbwrkhv wrote:
            | What's also really cool is that you can use PocketBase as a
            | Go library and build your app around it like any normal web
            | framework, while still having a great UI for quick
            | prototyping. And when you need more custom behaviour than
            | database functions allow, you just write some Go code, while
            | still compiling everything down to a single binary that you
            | can copy over.
        
         | datavirtue wrote:
         | "Local development is a massive pain"...that's enough to kill
         | it for me. No need to point out anything else.
        
       | PKop wrote:
       | Interesting statement here:
       | 
       | "We rewrote our data layer to treat the database as a simple
       | persistence layer rather than an application. We eliminated all
       | the triggers, stored procedures, and row-level security rules.
       | That logic lives in the application now."
       | 
       | Reminds me of the article and discussion here[0] over whether to
       | put logic in the database or not and to what degree.
       | 
       | [0] https://news.ycombinator.com/item?id=35643432 "Use Databases
       | Without Putting Domain Logic in Them"
        
         | bodecker wrote:
         | Also reminds me of this Martin Fowler post [0]:
         | 
         | "The situation becomes interesting when the vast majority of
         | your data sits in a single logical database. In this case you
         | have two primary issues to consider. One is the choice of
         | programming language: SQL versus your application language. The
         | other is where the code runs, SQL at the database, or in
         | memory.
         | 
         | SQL makes some things easy, but other things more difficult.
         | Some people find SQL easy to work with, others find it horribly
          | cryptic. The team's personal comfort is a big issue here. I
         | would suggest that if you go the route of putting a lot of
         | logic in SQL, don't expect to be portable - use all of your
         | vendors extensions and cheerfully bind yourself to their
         | technology. If you want portability keep logic out of SQL."
         | 
         | [0] https://martinfowler.com/articles/dblogic.html
        
       | dbmikus wrote:
       | Personally, I had a really easy time getting Supabase to work
       | locally. However, we use `dbmate` to manage our migrations
       | instead of built-in Supabase migrations.
       | 
       | Also curious to hear from others on this:
       | 
       | > After a bit of sleuthing, it ended up that Supabase was taking
       | a database backup that took the database fully offline every
       | night, at midnight.
       | 
       | This seems like a terrible design decision if true. Why not just
       | backup via physical or logical replication?
       | 
       | And totally hear the issues here with database resizing and
       | vacuuming and other operations. That stuff is a big pain when it
       | breaks.
        
         | crooked-v wrote:
         | Supabase daily backups just use pg_dump. If their database was
         | going offline, then something else was broken.
        
           | kiwicopple wrote:
           | (supabase ceo)
           | 
           | To give context, Val Town have a particularly write-heavy
           | setup, storing a lot of json strings. The nightly backups
           | were causing write-contention, even at their relatively small
           | size. We didn't detect errors because they were application-
           | level. We should have moved them to PITR as soon as they
           | mentioned it since the timing was so obviously coinciding
           | with backups. We're investigating moving everyone to PITR
           | (including the free tier). At the very least, we'll add more
           | control for backups - allowing users to change the
           | maintenance window, or possibly disabling backups completely
           | if they are managing it themselves.
        
             | aeyes wrote:
             | how does a backup cause write contention? are you backing
             | up to the same disk?
             | 
             | also why are backups using pg_dump? that's not a backup.
        
       | omeze wrote:
       | The local development & database migration story is Supabase's
       | biggest weakness. I hate having to do migrations live in prod.
       | The admin dashboard is just so much better than any alternative
       | Postgres tooling that it's been worth using despite that. Takes
       | care of the stuff I'd normally be sweating over when writing
       | migrations like nullable fields / FK constraints / JSON
       | formatting for default fields. Would be great if Supabase allowed
       | for a "speculative migration" in its UX where it spit out a file
       | you could use locally to test beforehand.
        
         | crooked-v wrote:
         | If you use the CLI, `supabase start` spins up a Docker instance
         | built from all your migration .sql files [1].
         | 
         | If anything, I think the admin dashboard encouraging directly
         | doing operations on the database is the biggest weakness of
         | Supabase. I would much prefer being able to lock it down to
         | purely CI-driven migrations.
         | 
         | [1]: https://supabase.com/docs/guides/getting-started/local-
         | devel...
        
         | kiwicopple wrote:
         | Please don't use the Dashboard to edit your database in
         | production. We're working on Preview Databases which will help
         | enforce this. For now this fits into our Shared Responsibility
         | Model:
         | 
         | https://supabase.com/docs/guides/platform/shared-responsibil...
         | 
         | You are responsible for a workflow that's suitable for your
         | application. Once you get into production, you should be using
         | Migrations for every database change. I have a more thorough
         | response here: https://news.ycombinator.com/item?id=36006018
        
       | aguynamedben wrote:
       | I love Steve Krouse!!!
        
       | kiwicopple wrote:
       | hey hn, supabase ceo here
       | 
       | the Val Town team were kind enough to share this article with me
       | before they released it. Perhaps you know from previous HN
       | threads that we take customer feedback very seriously. Hearing
       | feedback like this is hard. Clearly the team at Val Town wanted
       | Supabase to be great and we didn't meet their expectations. For
        | me personally, that hurts. A few quick comments:
       | 
       | 1. Modifying the database in production: I've published a doc on
       | Maturity Models[0]. Hopefully this makes it clear that developers
       | should be using Migrations once their project is live (not using
       | the Dashboard to modify their database live). It also highlights
       | the options for managing dev/local environments. This is just a
       | start. We're building Preview Databases into the native workflow
       | so that developers don't need to think about this.
       | 
       | 2. Designing for Supabase: Our goal is to make all of Postgres
       | easy, not obligatory. I've added a paragraph[1] in the first page
       | in our Docs highlighting that it's not always a good idea to go
       | all-in on Postgres. We'll add examples to our docs with
       | "traditional" approaches like Node + Supabase, Rails + Supabase,
       | etc. There are a lot of companies using this approach already,
       | but our docs are overly focused on "the Supabase way" of doing
       | things. There shouldn't be a reason to switch from Supabase to
       | any other Postgres provider if you want "plain Postgres".
       | 
       | 3. That said, we also want to continue making "all of Postgres"
       | easy to use. We're committed to building an amazing CLI
       | experience. Like any tech, we're going to need a few iterations.
        | We're building tooling for debugging and observability. We have
       | index advisors coming[2]. We recently added Open Telemetry to
       | Logflare[3] and added logging for local development[4]. We're
       | making platform usage incredibly clear[5]. We aim to make your
       | database indestructible - we care about resilience as much as
       | experience and we'll make sure we highlight that in future
       | product announcements.
       | 
       | I'll finish with something that I think we did well: migrating
       | away from Supabase was easy for Val Town, because it's just
       | Postgres. This is one of our core principles, "everything is
       | portable" (https://supabase.com/docs/guides/getting-
        | started/architectur...). Portability forces us to compete on
       | experience. We aim to be the best Postgres hosting service in the
       | world, and we'll continue to focus on that goal even if we're not
       | there yet.
       | 
       | [0] Maturity models:
       | https://supabase.com/docs/guides/platform/maturity-model
       | 
       | [1] Choose your comfort level:
       | https://supabase.com/docs/guides/getting-started/architectur...
       | 
       | [2] Index advisor: https://database.dev/olirice/index_advisor
       | 
       | [3] Open Telemetry:
       | https://github.com/Logflare/logflare/pull/1466
       | 
       | [4] Local logging: https://supabase.com/blog/supabase-logs-self-
       | hosted
       | 
       | [5] Usage:
       | https://twitter.com/kiwicopple/status/1658683758718124032?s=...
        
         | crooked-v wrote:
          | I feel like the issue with the Supabase dashboard and database
          | modification is more one of your general approach. You put the
          | editing tools right up front, when at best they should just be
          | an emergency hatch, and the only place to find info on
          | migrations is by going and looking around in the docs.
        
           | kiwicopple wrote:
           | yes, I agree. We're working on ways to make the Migration
           | system more prominent in the Dashboard. Preview Databases
           | will help with this too.
           | 
           | > just be an emergency hatch
           | 
           | I would go as far as saying that migrations should still be
           | used beyond the initial development. The Maturity Models
           | linked above include 4 stages: Prototyping, Collaborating,
           | Production, Enterprise. After "Prototyping", everything
           | should be Migrations.
           | 
           | The exception is that you can use the Dashboard for local
           | development. When you run "supabase start", you can access
           | the Dashboard to edit your local database. From there you can
           | run "supabase db diff" to convert your changes into a
           | migration.
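The local-first workflow described above, written out as CLI invocations. These command names come from the comment itself, but the flag spelling and migration name are assumptions and may differ between CLI versions:

```shell
supabase start                     # spin up the local stack in Docker
# ...edit the local database through the local Dashboard...
supabase db diff -f add_profiles   # capture the changes as a migration file
supabase db push                   # apply pending migrations to the linked project
```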
        
         | primitivesuave wrote:
         | Let me just say that (for me) Supabase is one of the most
         | exciting startups of the past couple years and I'm sure these
         | issues will get ironed out eventually. I believe in your
         | overall mission and am inspired by how much progress you all
         | have made in just three years.
        
         | MuffinFlavored wrote:
         | [flagged]
        
           | robbiep wrote:
            | I cannot be the only person here (are there any people left?)
            | who wishes that the comments section does not devolve into
            | LLM summaries of articles.
        
             | MuffinFlavored wrote:
              | For me it's the opposite. You've got the CEO speaking
              | corporate platitudes, trying to defend himself in the
              | comments by shifting focus away from the actual issues. The
              | LLM helped summarize the issues at hand.
        
         | tr3ntg wrote:
          | Appreciate this well-thought-out response. As someone who has
          | built several proof-of-concepts on Supabase (but never going
          | far enough to test its limits), articles like Val Town's here
          | and responses like yours all feed into my analysis of the
          | platform for future projects.
         | 
         | It's funny that threads like these bring up comments like "Well
         | I use XYZ and it solves all of my problems." As if a one-time
         | mention of a new PaaS is enough to bank on it for future
         | projects. Although I can't lie - I do bookmark every PaaS that
         | I see mentioned on HN.
         | 
         | Regardless, I'd much rather put my faith in a platform like SB
         | that has been battle-tested in public, even if it doesn't work
         | out perfectly every time.
         | 
         | Always glad to see you and the team showing up for the
         | discussions and improving SB.
        
           | refulgentis wrote:
           | +1, paradoxically, I'm even more likely to use supabase after
           | this. Really thoughtful
        
             | solarkraft wrote:
             | Not paradoxical at all. They're clearly interested in
             | competing fairly instead of locking you in. That's a big
             | advantage. They're also critically evaluating their
             | approach. Exactly what I as a customer would want!
        
       | pbreit wrote:
        | The options for spinning up CRUD apps (i.e., 95% of projects) are
        | still quite miserable.
        
         | tr3ntg wrote:
         | I'd say Supabase is great at spinning up CRUD apps. If
         | anything, this article could be summarized as "Because Val Town
         | is much more than a CRUD app, they had a harder time with
         | Supabase than the average."
        
         | rco8786 wrote:
         | Assuming you're talking about Supabase, I kind of disagree.
         | 
         | There's an initial learning curve with the row level security
         | stuff, but once you get a good grasp of it and come up with a
          | few patterns that suit your needs, it's insanely fast to
          | develop on. You're trading the time it takes to build and
          | manage an API for the time it takes to set up RLS.
        
         | reducesuffering wrote:
         | https://github.com/t3-oss/create-t3-app hosted on Vercel +
         | Planetscale / Railway DB is very easy, I would be surprised if
         | you were miserable doing that.
         | 
          | Mostly that solves setting up Auth and the Prisma SQL ORM
          | against your DB, but the Next.js App directory with the Prisma
          | setup (2 files / 50 LOC) done is even smoother.
        
       | ahachete wrote:
        | A middle way could be self-hosting Supabase, whether you use
        | more or fewer Supabase features.
       | 
        | I know self-hosting might be challenging, especially getting a
        | production-ready Postgres backend for it.
       | 
       | That's why at StackGres we have built a Runbook [1] and companion
       | blog post [2] to help you run Supabase on Kubernetes. All
       | required components are fully open source, so you are more than
       | welcome to try it and give feedback if you are looking into this
       | alternative.
       | 
       | [1]: https://stackgres.io/doc/latest/runbooks/supabase-stackgres/
       | 
       | [2]: https://stackgres.io/blog/running-supabase-on-top-of-
       | stackgr...
       | 
        
       | armatav wrote:
       | It definitely needs DB branching.
        
         | Ken_At_EM wrote:
         | Why is this comment getting downvoted?
        
       | kdrag0n wrote:
       | Honestly, I want to like Supabase but a lot of this resonates
       | with me even for a fairly small project. I also ended up with 3
       | user tables due to RLS limitations: auth users, public user
       | profile info, and private user info (e.g. Stripe customer IDs).
       | PostgREST's limitations also had me going back to an API server
       | architecture because I definitely didn't want to write logic in
       | database functions.
       | 
       | The only reason I haven't migrated yet is because I'd have to
       | rewrite the data layer to use Prisma/Drizzle instead of
       | Supabase's PostgREST client, and considering that this is a side
       | project, the problems aren't quite big enough to justify that.
        
       | bodecker wrote:
       | (Significantly edited after discussion)
       | 
       | I also had a tough time working w/ an app someone else built on
       | Supabase. We kept bumping up against what felt like "I know
       | feature X exists in postgres, but it's 'coming soon' in
       | Supabase." IIRC the blocker was specific to the trigger/edge
       | function behavior.
       | 
       | However after reflecting more, I don't remember enough to make a
       | detailed case. Perhaps the issue was with our use of the product.
        
         | kiwicopple wrote:
         | (supabase ceo)
         | 
         | > _" I know feature X exists in postgres, but it's 'coming
         | soon' in Supabase."_
         | 
         | There is no feature that exists in postgres that doesn't
         | already exist in Supabase. In case it's not clear, supabase is
         | just Postgres. We build extensions, we host it for you, and we
         | build tooling around the database. Our Dashboard is one of
         | those tools, but there is always an escape hatch - you can use
         | it like any other postgres database, with all the existing
         | tooling you're most comfortable with.
        
           | bodecker wrote:
           | Thanks for the response. I do recall hitting some product
           | limitations (a webhooks "beta" that we tried to use but hit a
           | blocker). Reflecting more, I don't recall the supporting
           | details specifically enough though. Edited original post and
           | apologies for the added noise.
        
       | jackconsidine wrote:
       | Nice read. I run 5-6 projects on Supabase currently. I have also
       | run into the local development / migration obstacles. It's
       | otherwise been pretty great for our needs
        
       | Ken_At_EM wrote:
       | This echoes my experience with Supabase exactly. We migrated to a
       | similar solution for the same reasons.
        
       | gajus wrote:
       | Curious why you decided on Drizzle over Kysely.
       | 
       | I was recently exploring the space, and Kysely came out on top as
       | a framework with broader adoption.
       | 
       | https://npmtrends.com/drizzle-orm-vs-kysely
        
         | reducesuffering wrote:
         | You should consider the similar reasons why you chose Kysely
         | over Prisma. Prisma has far broader adoption.
         | https://npmtrends.com/drizzle-orm-vs-kysely-vs-prisma
        
         | tmcw wrote:
         | Sure! I think Kysely is great too, but went with Drizzle for a
         | few different reasons:
         | 
         | Kysely is a little more established than Drizzle, which I think
         | is one of the major reasons why it has broader adoption. My bet
         | is that Drizzle is moving really fast, gaining adoption, and
         | might catch up at some point. It's also - in terms of
         | performance - super fast, and nicely layers on top of fast
         | database clients.
         | 
         | Some of the differences that I liked about Drizzle were the
         | extra database drivers being core and developed as part of the
         | main project. It supports prepared statements, which is
         | awesome. The Drizzle API also covers an impressive percentage
         | of what you can do in raw SQL, and when there's something
         | missing, like a special column type, it's been pretty
         | straightforward to add.
         | 
         | I prefer the way that it lets us write parts of queries, and
         | compose them - like you import expressions like "and" and "eq"
         | and you can write and(eq(users.id, 'x'), eq(users.name, 'Tom'))
         | and you can actually stringify that to the SQL it generates. Or
         | you can do a custom bit of SQL and use the names of table
         | columns in that, like `COUNT(${users.name})`. I can't say
         | scientifically that this is superior, and it's almost a little
         | weird, but I've really found it a nice way to compose and debug
         | queries.
         | 
         | That said, Kysely is also a great project and it'd be possible
         | to build great products with it, too. I just found the
         | momentum, API, and philosophy of Drizzle to be pretty
         | compelling.
        
         | Akkuma wrote:
         | You can actually integrate both as well if you really want to
         | leverage Drizzle for schema building and migrations:
         | https://github.com/drizzle-team/drizzle-orm/tree/main/drizzl...
        
       | AlchemistCamp wrote:
       | Render.com is pricey, but very underrated.
       | 
       | If your business model isn't broken by their pricing model, I
       | really don't know an easier/more time-efficient choice.
        
       | exac wrote:
       | > Local development was tough
       | 
       | > Unfortunately, we just couldn't get it to work
       | 
       | Every time I read one of these migration stories, I find myself
       | waiting with bated breath for the part the team couldn't
       | achieve. After finding it, the remainder of the story becomes
       | difficult to read.
       | 
       | It isn't necessarily the team's fault; the developer experience
       | clearly has room for improvement. Props to Val Town for being so
       | honest, it is difficult to do.
        
         | bitdivision wrote:
         | Can you elaborate on why it becomes difficult to read? Was
         | there something obvious they missed?
        
       | tonerow wrote:
       | Supabase is also great for auth. Did you reimplement auth
       | yourself or switch to another auth service or framework?
        
         | doodlesdev wrote:
         | PSA: Supabase Auth is based on their fork [0] of Netlify's
         | Gotrue [1]. If you are migrating out of Supabase completely you
         | can just drop in Gotrue for authentication.
         | 
         | [0]: https://github.com/supabase/gotrue
         | 
         | [1]: https://github.com/netlify/gotrue
        
         | tmcw wrote:
         | We switched to Clerk.dev. Thankfully we had only supported
         | magic link auth, so there wasn't much information to migrate
         | over. Clerk has been pretty good - they have a great Remix
         | integration and solid admin experience.
        
           | doodlesdev wrote:
           | Any specific reason to go with it instead of alternatives?
        
       | t1mmen wrote:
       | I hadn't touched SQL for almost 7 years, but dipped my toes back
       | in to build a PoC using Supabase. Despite some initial pains
       | around RLS, I've grown to love it.
       | 
       | Sure, Supabase has some awkward quirks and issues, and the author has
       | some good points. But when it works like it should, it's pretty
       | awesome. I think of it as a powerful wrapper around solid
       | services that make for great DX, in _most_ cases.
       | 
       | If Supabase could provide a great way to handle migrations and
       | RLS, that'd be the biggest improvement to most people's
       | workflows, I'd bet.
       | 
       | I really wish I could just define my schema, tables, functions,
       | triggers, policies etc as typescript, then have migrations
       | generated from that.
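       | 
       | As a rough sketch of what that workflow could look like - a
       | hypothetical helper, not an existing Supabase feature (tools
       | like drizzle-kit work along similar lines):

```typescript
// Hypothetical sketch of "schema as TypeScript": declare tables as
// plain objects and derive CREATE TABLE statements from them. Real
// migration tools additionally diff against the previous schema to
// emit incremental migrations rather than full CREATEs.
type ColumnDef = { type: string; primaryKey?: boolean; notNull?: boolean };
type TableDef = { name: string; columns: Record<string, ColumnDef> };

function createTableSQL(table: TableDef): string {
  const cols = Object.entries(table.columns).map(([name, def]) => {
    const parts = [name, def.type];
    if (def.primaryKey) parts.push("PRIMARY KEY");
    if (def.notNull) parts.push("NOT NULL");
    return parts.join(" ");
  });
  return `CREATE TABLE ${table.name} (${cols.join(", ")});`;
}

const profiles: TableDef = {
  name: "profiles",
  columns: {
    id: { type: "uuid", primaryKey: true },
    username: { type: "text", notNull: true },
  },
};

console.log(createTableSQL(profiles));
// CREATE TABLE profiles (id uuid PRIMARY KEY, username text NOT NULL);
```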
        
       | motoxpro wrote:
       | Great read. Similar to my experience with Hasura. Migrations
       | were better there, but the row level security was a nightmare.
       | Went with just a custom Node backend with Prisma and it's a
       | dream. No more writing tons of JSON rules and multiple views
       | just to avoid querying the email field.
       | 
       | Seems like these types of services are good for basic large scale
       | crud applications, probably why you have Hasura pivoting to
       | enterprise.
       | 
       | The quote at the end about going back to the future is exactly
       | how I felt. I'll never use Hasura/Supabase/etc. again. It just
       | makes things more difficult.
        
         | danielrhodes wrote:
         | Had a similar experience with Hasura. They have done some
         | amazing things leveraging Postgres and GraphQL. But there were
         | just too many things that got really questionable. Things like
         | migrations becoming inconsistent with metadata, schema lock in,
         | poor ability to do rate limiting, having to use stored
         | procedures for everything, weird SQL that had performance
         | issues, unexplained row level locking, and so on. Local
         | development was a total mess.
         | 
         | Ultimately we were making architectural decisions to please
         | Hasura, not because it was in the best interests of what or how
         | we were building.
        
       | simonw wrote:
       | The documentation section here applies to so many products I've
       | battled in the past.
       | 
       | > The command supabase db remote commit is documented as "Commit
       | Remote Changes As A New Migration". The command supabase
       | functions new is documented as "Create A New Function Locally."
       | The documentation page is beautiful, but the words in it just
       | aren't finished.
       | 
       | Great documentation is such a force multiplier for a product.
       | It's so worthwhile investing in this.
       | 
       | Don't make your most dedicated users (the ones who get as far as
       | consulting your documentation) guess how to use your thing!
        
         | kiwicopple wrote:
         | this is very fair criticism. We hired a Head of Docs in March.
         | I hope the improvements are evident since he joined, both in
         | content and in usability.
         | 
         | We have a long way to go, but we're working on it.
        
           | pluto_modadic wrote:
           | yeah, most golang/rust/API documentation in products seems to
           | think that "the function name is documentation", which.... no
           | it's not. that's a tooltip in an IDE, not a docs website.
        
       ___________________________________________________________________
       (page generated 2023-05-19 23:00 UTC)