Here's how GITHUB.COM makes money* and how much!

*Please read our disclaimer before using our estimates.

GITHUB.COM

Report Sections:

  1. Analyzed Page
  2. Matching Content Categories
  3. CMS
  4. Monthly Traffic Estimate
  5. How Does Github.com Make Money
  6. How Much Does Github.com Make
  7. Wordpress Themes And Plugins
  8. Keywords
  9. Topics
  10. Payment Methods
  11. Questions
  12. Schema
  13. External Links
  14. Analytics And Tracking
  15. Libraries
  16. Hosting Providers

We are analyzing https://github.com/wicg/turtledove/issues/909.

Title:
Protected Audience AB testing · Issue #909 · WICG/turtledove
Description:
Why do we need A/B tests? A/B testing is a key feature for experimentation in order to increase performance of Protected audience It allows us to measure the impact of technical changes We must be able to measure long term effects in a c...
Website Age:
17 years and 8 months (reg. 2007-10-09).

Matching Content Categories {📚}

  • Video & Online Content
  • Social Networks
  • Graphic Design

Content Management System {📝}

What CMS is github.com built with?


Github.com utilizes WORDPRESS.

Traffic Estimate {📈}

What is the average monthly size of github.com audience?

🚀🌠 Tremendous Traffic: 10M - 20M visitors per month


Based on our best estimate, this website will receive around 10,000,019 visitors in the current month.

  • Check SE Ranking
  • Check Ahrefs
  • Check Similarweb
  • Check Ubersuggest
  • Check Semrush

How Does Github.com Make Money? {💸}


Subscription Packages {💳}

We've located a dedicated page on github.com that might include details about subscription plans or recurring payments. We identified it based on the word "pricing" in one of its internal links. Below, you'll find additional estimates of its monthly recurring revenue.

How Much Does Github.com Make? {💰}


Subscription Packages {💳}

Prices on github.com are in US Dollars ($). They range from $4.00/month to $21.00/month.
We estimate that the site has approximately 4,989,889 paying customers.
The estimated monthly recurring revenue (MRR) is $20,957,532.
The estimated annual recurring revenue (ARR) is $251,490,385.
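For context on how these figures fit together, here is a minimal sketch of the arithmetic, assuming the estimate multiplies the customer count by an average plan price; the $4.20 average is inferred from the published numbers above and is not disclosed by the analyzer.

```typescript
// Rough reconstruction of the subscription-revenue estimate above. The $4.20
// average monthly price is an assumption inferred from the published figures
// (20,957,532 / 4,989,889 ≈ 4.20); the analyzer does not disclose its method.
const payingCustomers = 4_989_889;
const assumedAvgPricePerMonth = 4.2; // USD, within the $4.00 to $21.00 plan range

const mrr = payingCustomers * assumedAvgPricePerMonth; // ≈ $20,957,534
const arr = mrr * 12;                                  // ≈ $251,490,406

console.log(`MRR ≈ $${Math.round(mrr).toLocaleString("en-US")}`);
console.log(`ARR ≈ $${Math.round(arr).toLocaleString("en-US")}`);
```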

Wordpress Themes and Plugins {🎨}

What WordPress theme does this site use?

Strangely, we were not able to detect any theme on the page.

What WordPress plugins does this website use?

Strangely, we were not able to detect any plugins on the page.

Keywords {🔍}

user, population, users, tests, split, measure, test, bid, identifier, behavior, publisher, advertiser, fhoering, strategy, bits, time, bidding, testing, issue, buying, auction, chrome, urls, protected, audience, call, case, interest, party, scenarios, commented, long, effects, multiple, single, experimentgroupid, campaigns, work, proposal, entropy, low, prevent, sign, lets, complex, websites, propose, based, browser, group,

Topics {✒️}

/wicg/turtledove/blob/main/meetings/2023-11-29-fledge-call-minutes-slides-ab-testing /wicg/turtledove/blob/main/fledge protected audience worklet protected audience privacy sandbox apis remysaissy edits contributor 1st-party identifier activate ab tests long term effects 1st party users leak additional information ab testing type projects projects milestone global user population important ab tests technical ab tests complciated buying strategies added nike shoes low entropy userexperimentgroupid interest group split support coordinated experiments full user identifier buyers' trusted servers ab test population contextual call allowing advertiser web page ab test populations complex user journey browser javascript context real world scenarios handle multiple experiments extending fledge upper funnel campaigns comment metadata assignees unique split inside cookie sync scenario ad techs combine shared storage api wicg device-auction optionally reduce auction latency large population drift unique user identifier ab tests users multiple times segmenting users based propose buying strategy bid strategy produced shared-storage proposal

Payment Methods {📊}

  • Braintree

Questions {❓}

  • Already have an account?
  • Why do we need A/B tests?

Schema {🗺️}

DiscussionForumPosting:
      context:https://schema.org
      headline:Protected Audience AB testing
      articleBody:### Why do we need A/B tests?
- A/B testing is a key feature for experimentation in order to increase the performance of Protected Audience
- It allows us to measure the impact of technical changes
- We must be able to measure long term effects in a consistent way

To give an example of what we mean by long term effects, let's look at a complex user journey and assume that we split users per publisher website (because we have access to the hostname in the PA API): on some publisher websites we propose buying strategy A and on others buying strategy B, and we can measure conversions like sales for each ad display. In retargeting, we show a banner to users multiple times before they buy. For example, a user has added Nike shoes to his basket but has not converted; we will remind him of the product through ads on several publishers. When he converts, the sale will be attributed to the publisher on which the last ad was shown and not to whatever happened before that. In other words, it is impossible to measure the effect of buying strategy A versus B since we will not have a single identifier across sites.

### Existing mechanism with ExperimentGroupId
https://github.com/WICG/turtledove/blob/main/FLEDGE.md#21-initiating-an-on-device-auction
> Optionally, perBuyerExperimentGroupIds can be specified to support coordinated experiments with buyers' trusted servers. If specified, this must also be an integer between zero and 65535 (16 bits).

The expected workflow has been described here: [Extending FLEDGE to support coordinated experiments by abrik0131 · Pull Request #266 · WICG/turtledove](https://github.com/WICG/turtledove/pull/266). Our understanding is that this translates to: ![exp_group_id_workflow](https://github.com/WICG/turtledove/assets/13346472/49111adc-e42d-449b-808d-5b4b9601fcb9)

**Pros:**
- `buyerExperimentGroupId` can be dynamically set by the buyer as part of the contextual call, allowing any split (see comment below; this might no longer apply as async calls should be used to reduce auction latency)
- 16 bits (65535 different values) is big enough to handle multiple experiments at a time
- this AB testing seems interesting for measuring technical changes
- analysis can be done directly via ExperimentGroupId as it is propagated to `reportWin`

**Cons:**
- We cannot measure long term effects (as explained above) because the split must be based on contextual signals, for example the publisher domain, so we can only measure something directly attributed to this ad display; the same Chrome browser might see changes on the very same ad campaign in populations A & B
- One alternative that would allow measuring long term effects would be segmenting users based on geolocation; the challenge is having populations of the same size and the same user behavior, so it will not be universally applicable but depend on the use case
- It might not be applicable to auctions where [signals are resolved in an asynchronous way](https://github.com/WICG/turtledove/blob/main/FLEDGE.md#211-providing-signals-asynchronously,) as in this case the contextual call and the key/value server call run in parallel

### Splitting per interest group and 1st party user id
Doing a per interest group split seems appealing because, for interest groups created on one advertiser website, one could apply the same changes to the same campaigns for all 1st party users of this advertiser.
This would mainly work for single advertiser AB tests where we target users that already went to the advertiser's web page. It would work less well for more complex scenarios on all our traffic where we modify the behavior of multiple campaigns on multiple websites; in this case we have the same drawback as above, the very same user could see behavior changes in populations A and B. As we would split users during the tagging phase, we cannot guarantee that we really see those users again for a bidding opportunity. So we cannot guarantee an even split: for bidding we might only see n% of users of population A and a different amount for population B (some more explanation here: [Approach 2: Intent-to-Treat](https://www.thinkwithgoogle.com/intl/en-gb/marketing-strategies/monetisation-strategies/a-revolution-in-measuring-ad-effectiveness/)) ![user_id_split](https://github.com/WICG/turtledove/assets/13346472/271ece1e-c4dc-4d82-80a9-8c88aa07548c)

**Pros:**
- Could handle single advertiser scenarios where we consistently know the user by its 1st party id

**Cons:**
- For reporting we need to log the AB test population from computeBid to reportWin; it could be done by encoding the AB test population inside the renderUrl at the expense of k-anonymity (handling 5 AB tests in an independent way would mean 2^5 renderUrls); alternatively aggregated reporting like Aggregated ARA could be used
- We cannot handle a large number of AB tests in parallel, in any case fewer than with ExperimentGroupId
- Leakage, as the same user will change populations across advertiser websites, which will be an issue when the behavior is changed on retargeting campaigns for multiple advertisers or when more upper funnel campaigns are used
- Additional bias because we split users based on tagging behavior and not when we get a bid opportunity

### Using shared storage for AB testing
The [shared-storage](https://github.com/WICG/shared-storage) proposal already has a section on how to activate [AB tests](https://github.com/WICG/shared-storage#simple-example-consistent-ab-experiments-across-sites). The general idea is to create a unique user identifier (seed) for the Chrome browser with generateSeed, then call the window.sharedStorage.selectURL operation, which takes a list of urls, hashes the user identifier to an index in this list and then returns the url for that user. The AB test population would be encoded in the url, and as the number of urls is limited to 8 it would allow 3 bits of entropy for the user population. As different urls can be used for each call and would leak 3 bits each time, some mechanisms are in place to limit the budget per 24h per distinct number of urls (see https://github.com/WICG/shared-storage#budgeting). As of now shared storage can only be called from a browser JavaScript context and not from a Protected Audience worklet. This means the url selection can only happen during rendering and not during bidding, and therefore shared storage can only be used for pure creative AB tests and not Protected Audience bidding AB tests. So we still need a dedicated proposal to activate Protected Audience AB tests.

### Proposal - Inject a low entropy global user population into `computeBid`
For real world scenarios a global user population would still be needed for AB tests that need to measure complex user behaviors. As injecting any form of user identifier would leak additional information, we propose a low entropy user identifier and some mitigations to prevent using or combining this into a full user identifier.
Chrome could cluster all users into a low entropy `UserExperimentGroupId`, something like 3 bits. This identifier should be randomly drawn for each ad tech and not shared across all actors, to prevent our measurement from being influenced by the testing of other ad techs. As attribution is measured for each impression or click, we would like this identifier to be stable for some time, but it should also be shifted for a certain amount of users to prevent a large population drift over time. Long running AB tests will influence users, and user behavior will then change over time, introducing some bias. The usual way to solve this is restarting an AB test, which cannot be done here with such a limited amount of buckets. So one idea might be to constantly rotate the population. Constantly rotating the population would also be useful to limit the effectiveness of a coordinated attack among ad techs to identify a user. If 1% of users get reassigned to a new population each day, after 14 days about 14% of users might have shifted population. If the labels are rotated every X weeks, it adds further burden to those trying to collude and update their 1st-party ID → global ID mappings.
This new population id would be injected only into the `generateBid` function and also the trusted key/value server (to mirror current ExperimentGroupID behavior and because many of our computations are still server side; it is secure by design as it will run in a TEE without side effects). The identifiers could only get out of the `generateBid` function via existing mechanisms that already present privacy/utility trade-offs, for example:
- by adding more renderUrls at the expense of k-anonymity
- by reserving some bits of modelingSignals at the expense of handling fewer advertiserSignals
- by using aggregated reporting at the expense of DP noise and bucketization

If we encode the 3 bits into the renderUrl, this proposal seems very aligned with the proposal on [shared-storage](https://github.com/WICG/shared-storage) to allow 8 URLs (= 3 bits of entropy) for `selectURL` to [activate creative AB testing](https://github.com/WICG/shared-storage#simple-example-consistent-ab-experiments-across-sites) (post bidding). In our case, as Chrome would control the seed and the `generateSeed` function cannot be used, we would not leak more than 3 bits, so introducing any form of budget capping seems unnecessary. To prevent a cookie sync scenario where ad techs combine this new id into a full user identifier, Chrome could add an explicit statement to the [attestation](https://github.com/privacysandbox/attestation/blob/main/how-to-enroll.md) to prevent ad techs from sharing this id. By design, as we have few AB test populations, we could only run a limited number of AB tests at the same time, but we could reserve this for important AB tests and use the ExperimentGroupId mechanism more for technical AB tests.
      author:
         url:https://github.com/fhoering
         type:Person
         name:fhoering
      datePublished:2023-11-16T08:27:07.000Z
      interactionStatistic:
         type:InteractionCounter
         interactionType:https://schema.org/CommentAction
         userInteractionCount:6
      url:https://github.com/WICG/turtledove/issues/909
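The "Existing mechanism with ExperimentGroupId" section in the article body above refers to the perBuyerExperimentGroupIds field of the Protected Audience auction configuration. A minimal, hypothetical sketch of a seller passing such a group id via navigator.runAdAuction; the origins, URLs and the other config fields shown are placeholders, only perBuyerExperimentGroupIds is the field the issue discusses.

```typescript
// Hypothetical seller-side page code. runAdAuction is not yet in the standard
// DOM typings, hence the widened type and cast.
type AuctionNavigator = Navigator & {
  runAdAuction(config: object): Promise<string | null>;
};

async function runExperimentAuction(): Promise<string | null> {
  const auctionConfig = {
    seller: "https://seller.example",
    decisionLogicURL: "https://seller.example/decision-logic.js",
    interestGroupBuyers: ["https://buyer.example"],
    // 16-bit integer (0..65535), forwarded to the buyer's trusted key/value
    // server so bidding experiments can be coordinated server side.
    perBuyerExperimentGroupIds: {
      "https://buyer.example": 1234,
    },
  };
  return (navigator as AuctionNavigator).runAdAuction(auctionConfig);
}
```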
Person:
      url:https://github.com/fhoering
      name:fhoering
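The "Using shared storage for AB testing" section above hashes a per-browser seed into one of up to 8 URLs via window.sharedStorage.selectURL. Below is a simplified sketch of that flow, loosely following the shared-storage explainer's cross-site A/B example; the worklet file name, operation name and hash helper are illustrative assumptions.

```typescript
// --- experiment-worklet.ts (loaded via sharedStorage.worklet.addModule) ---
// `register` and `sharedStorage` are globals inside a Shared Storage worklet;
// they are declared here only so this sketch type-checks in isolation.
declare function register(name: string, operation: unknown): void;
declare const sharedStorage: { get(key: string): Promise<string | undefined> };

class SelectURLOperation {
  async run(urls: { url: string }[], data: { name: string }): Promise<number> {
    // The per-browser seed is assumed to have been written earlier.
    const seed = (await sharedStorage.get("seed")) ?? "";
    return simpleHash(seed + data.name) % urls.length; // index of the chosen URL
  }
}
register("ab-experiment", SelectURLOperation);

// Illustrative stand-in for the explainer's hash step.
function simpleHash(s: string): number {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}

// --- On the page ---
async function pickVariantUrl() {
  const ss = (window as any).sharedStorage;
  await ss.worklet.addModule("experiment-worklet.js");
  // Up to 8 URLs are allowed, i.e. at most 3 bits of entropy per call.
  return ss.selectURL(
    "ab-experiment",
    [
      { url: "https://adtech.example/creative-a" },
      { url: "https://adtech.example/creative-b" },
    ],
    { data: { name: "experimentA" } }
  );
}
```

selectURL resolves to an opaque URL intended for a fenced frame, which is why the issue notes this mechanism only covers creative (post-bidding) A/B tests, not bidding A/B tests.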
InteractionCounter:
      interactionType:https://schema.org/CommentAction
      userInteractionCount:6
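The proposal at the end of the article body asks Chrome to assign each browser a low entropy (about 3-bit) UserExperimentGroupId per ad tech, rotated slowly. Nothing like this is shipped; the sketch below only illustrates the bucketing and drift arithmetic the text describes, with hypothetical names and hashing.

```typescript
import { createHash } from "node:crypto";

// Hypothetical browser-side assignment: a 3-bit bucket, drawn independently
// per ad tech so experiments of different ad techs do not interfere.
function userExperimentGroupId(browserSeed: string, adTechOrigin: string): number {
  const digest = createHash("sha256").update(`${browserSeed}|${adTechOrigin}`).digest();
  return digest[0] & 0b111; // 8 possible populations = 3 bits of entropy
}

// Drift arithmetic from the proposal: if 1% of users are reassigned each day,
// after 14 days roughly 1 - 0.99^14 ≈ 13% (the issue rounds to 14%) have moved.
function expectedShiftedShare(dailyRotation: number, days: number): number {
  return 1 - Math.pow(1 - dailyRotation, days);
}

console.log(userExperimentGroupId("seed-123", "https://adtech.example")); // 0..7
console.log(expectedShiftedShare(0.01, 14).toFixed(3)); // ≈ 0.131
```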

External Links {🔗} (5)

Analytics and Tracking {📊}

  • Site Verification - Google

Libraries {📚}

  • Clipboard.js
  • D3.js
  • Lodash

Emails and Hosting {✉️}

Mail Servers:

  • aspmx.l.google.com
  • alt1.aspmx.l.google.com
  • alt2.aspmx.l.google.com
  • alt3.aspmx.l.google.com
  • alt4.aspmx.l.google.com

Name Servers:

  • dns1.p08.nsone.net
  • dns2.p08.nsone.net
  • dns3.p08.nsone.net
  • dns4.p08.nsone.net
  • ns-1283.awsdns-32.org
  • ns-1707.awsdns-21.co.uk
  • ns-421.awsdns-52.com
  • ns-520.awsdns-01.net
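The MX and NS records above can be re-checked at any time; here is a small Node.js sketch using the built-in dns module (results reflect whatever the resolver returns at query time, not this snapshot).

```typescript
import { resolveMx, resolveNs } from "node:dns/promises";

// Re-query the mail and name servers for github.com and print them,
// MX sorted by priority and NS as returned by the resolver.
async function checkDns(domain: string) {
  const mx = await resolveMx(domain);
  const ns = await resolveNs(domain);
  console.log(
    "Mail servers:",
    mx.sort((a, b) => a.priority - b.priority).map((r) => r.exchange)
  );
  console.log("Name servers:", ns);
}

checkDns("github.com").catch(console.error);
```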