Semcasting provides data that delivers multi-channel contact information for individuals in the United States. Semcasting's database was created with 100% FCRA-compliant and non-regulated data. The data sources include publicly available property data, census data, federal government survey data, and Federal Reserve reporting. The database includes large samples of households nationwide with known incomes, assets, discretionary spending, and automobile ownership, along with local tax rates and cost-of-living figures.
Semcasting's variables are part of what makes the Semcasting modeling process so powerful for direct marketing campaigns. Our variables set us apart from our competitors and are influential in creating the strongest predictive models and targeted lists for our clients' campaigns – no matter the industry or issue they are facing.
For more information about Semcasting's data universe, please go to the Resources page, where you can download PDFs describing the individual variables available for targeting.
Predictive models have been used for customer acquisition, cross-selling, and collections programs for years. It is widely accepted that lists built from predictive models outperform lists based on simple selection criteria by double-digit percentages. However, developing a model typically involves expensive tools and data and a high level of statistical expertise. Semcasting Modeler changes all that by automating the predictive modeling process, turning what is often weeks of development into an hour of compute time.
Semcasting Modeler employs machine learning and automation to enhance the speed and efficiency of predictive modeling. Based on patented genetic algorithms, Semcasting Modeler software allows models to be created using hundreds of data variables rather than just a few. Data predictors are the product of a process in which thousands of models are built simultaneously to determine which variables, and combinations of variables, will offer the strongest contribution to the final model. Because the software uses a much broader set of data during the model-building process, there is a greater likelihood that subtle predictors will be found. The process takes hours rather than days or weeks to complete, often producing models that measurably outperform traditional regression-based approaches.
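To illustrate the general idea behind genetic-algorithm variable selection, here is a minimal, self-contained sketch. It is a hypothetical toy, not Semcasting's patented implementation: each candidate "model" is a bit-mask over variables, a population of masks is scored in parallel against a synthetic target, and the fittest masks are crossed and mutated over successive generations. All data, scoring, and parameter choices here are illustrative assumptions.

```python
# Toy genetic-algorithm variable selection (illustrative only, not
# Semcasting's actual algorithm). Each chromosome is a bit-mask over
# candidate variables; fitness rewards masks whose selected columns
# best explain a synthetic target, with a small penalty for size.

import random

random.seed(42)

N_VARS, N_ROWS = 20, 200
TRUE_VARS = {2, 7, 11}  # variables that actually drive the target (assumed)

# Synthetic data: the target is the sum of the "true" variables plus noise.
X = [[random.gauss(0, 1) for _ in range(N_VARS)] for _ in range(N_ROWS)]
y = [sum(row[j] for j in TRUE_VARS) + random.gauss(0, 0.3) for row in X]

def fitness(mask):
    """Correlation between the sum of selected columns and the target,
    lightly penalized for model size to favor parsimonious masks."""
    cols = [j for j in range(N_VARS) if mask[j]]
    if not cols:
        return -1.0
    s = [sum(row[j] for j in cols) for row in X]
    ms, my = sum(s) / N_ROWS, sum(y) / N_ROWS
    cov = sum((a - ms) * (b - my) for a, b in zip(s, y))
    var_s = sum((a - ms) ** 2 for a in s)
    var_y = sum((b - my) ** 2 for b in y)
    corr = cov / ((var_s * var_y) ** 0.5 or 1.0)
    return corr - 0.01 * len(cols)

def evolve(pop_size=60, generations=40, mut_rate=0.05):
    """Score a whole population of masks each generation, keep the best
    half, and refill with one-point crossover plus bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(N_VARS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_VARS)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut_rate) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("selected variables:", sorted(j for j in range(N_VARS) if best[j]))
```

The point of the sketch is the shape of the search, not the scoring: because many masks are evaluated and recombined at once, variable combinations that no single-variable screen would surface can still rise through the population, which mirrors the "thousands of models built simultaneously" process described above.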