Vulnerabilities and Threats in Distributed Systems


Vulnerabilities and Threats in Distributed Systems*
Prof. Bharat Bhargava, Dr. Leszek Lilien
Department of Computer Sciences and the Center for Education and Research in Information Assurance and Security (CERIAS), Purdue University
www.cs.purdue.edu/people/{bb, llilien}
Presented by Prof. Sanjay Madria, Department of Computer Science, University of Missouri-Rolla
* Supported in part by NSF grants IIS-0209059 and IIS-0242840
3/23/04

Acknowledgments
Prof. Bhargava thanks the organizers of the 1st International Conference on Distributed Computing & Internet Technology (ICDCIT 2004), in particular Prof. R. K. Shyamsunder, Prof. Hrushikesha Mohanty, Prof. R. K. Ghosh, Prof. Vijay Kumar, and Prof. Sanjay Madria. He thanks the attendees and regrets that he could not be present. He visited Bhubaneswar in 2001, enjoyed it tremendously, and was looking forward to coming again. He welcomes communication about this research; potential exists for research collaboration (please send mail to bb@cs.purdue.edu), and he will very much welcome your visit to Purdue University.

From Vulnerabilities to Losses
- Growing business losses due to vulnerabilities in distributed systems
  - Identity theft in 2003: expected loss of $220 billion worldwide, with a 300%(!) annual growth rate [csoonline.com, 5/23/03]
  - Computer virus attacks in 2003: estimated loss of $55 billion worldwide [news.zdnet.com, 1/16/04]
- Vulnerabilities occur in hardware, networks, operating systems, DB systems, and applications
- The loss chain:
  - Dormant vulnerabilities enable threats against systems
  - Potential threats can materialize as (actual) attacks
  - Successful attacks result in security breaches
  - Security breaches cause losses

Vulnerabilities and Threats
- Vulnerabilities and threats start the loss chain, so it is best to deal with them first
- Dealing with vulnerabilities: gather information on vulnerabilities and security incidents in metabases and notification systems, then disseminate it
  - Example vulnerability and incident metabases: CVE (Mitre), ICAT (NIST), OSVDB (osvdb.com)
  - Example vulnerability notification systems: CERT (SEI-CMU), Cassandra (CERIAS-Purdue)
- Dealing with threats: threat assessment procedures; specialized risk analysis using e.g. vulnerability and incident information; threat detection / threat avoidance / threat tolerance

Outline
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4.
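The loss chain above (dormant vulnerability enables a threat, a threat materializes as an attack, a successful attack becomes a breach, a breach causes a loss) can be sketched as a tiny Python model. This is purely an illustrative sketch: the stage list follows the slide, but the function names and logic are inventions for this example, not code from the presentation.

```python
# Sketch of the "loss chain" from the slides:
# vulnerability -> threat -> attack -> breach -> loss.
LOSS_CHAIN = ["vulnerability", "threat", "attack", "breach", "loss"]

def chain_stage_reached(events):
    """Return the furthest loss-chain stage evidenced by a set of
    observed events, or None if the chain has not started."""
    furthest = None
    for stage in LOSS_CHAIN:
        if stage in events:
            furthest = stage
    return furthest

def breaks_chain(defense_stage):
    """A defense applied at some stage stops everything after it:
    e.g., eliminating the threat also prevents the attack, the
    breach, and the loss."""
    i = LOSS_CHAIN.index(defense_stage)
    return LOSS_CHAIN[i + 1:]

print(breaks_chain("threat"))  # prints ['attack', 'breach', 'loss']
```

The point the model makes is the one the deck argues: intervening early (at the vulnerability or threat stage) cuts off the largest suffix of the chain.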
Fraud Countermeasure Mechanisms

Vulnerabilities - Topics
- Models for Vulnerabilities
- Fraud Vulnerabilities
- Vulnerability Research Issues

Models for Vulnerabilities (1)
- A vulnerability in the security domain is like a fault in the reliability domain: a flaw or weakness in system security procedures, design, implementation, or internal controls
- It can be accidentally triggered or intentionally exploited, causing security breaches
- Modeling vulnerabilities involves analyzing vulnerability features, classifying vulnerabilities, building vulnerability taxonomies, and providing formalized models
- System design should not let an adversary learn vulnerabilities unknown to the system owner

Models for Vulnerabilities (2)
- The literature offers diverse models of vulnerabilities, in various environments and under varied assumptions. Examples:
  - Analysis of four common computer vulnerabilities [17]: identifies their characteristics, the policies violated by their exploitation, and the steps needed for their eradication in future software releases
  - Vulnerability lifecycle model applied to three case studies [4]: shows how systems remain vulnerable long after security fixes
  - Vulnerability lifetime stages: appears, discovered, disclosed, corrected, publicized, disappears

Models for Vulnerabilities (3)
- Model-based analysis to identify configuration vulnerabilities [23]:
  - Formal specification of desired security properties
  - An abstract model of the system that captures its security-related behaviors
  - Verification techniques to check whether the abstract model satisfies the security properties
- Kinds of vulnerabilities [3]:
  - Operational: e.g., an unexpected broken linkage in a distributed database
  - Information-based: e.g., unauthorized access (secrecy/privacy), unauthorized modification (integrity), traffic analysis (the inference problem), and Byzantine input

Models for Vulnerabilities (4)
- Not all vulnerabilities can be removed, and some should not be, because:
  - Vulnerabilities create only a potential for attacks; some cause no harm over the entire system life cycle
  - Some known vulnerabilities must be tolerated, due to economic or technological limitations
  - Removal of some vulnerabilities may reduce usability; e.g., removing vulnerabilities by adding passwords for each resource request lowers usability
  - Some vulnerabilities are a side effect of a legitimate system feature; e.g., the setuid UNIX command creates vulnerabilities [14]
- Threat assessment is needed to decide which vulnerabilities to remove first

Fraud Vulnerabilities (1)
- Fraud: a deception deliberately practiced in order to secure unfair or unlawful gain [2]. Examples:
  - Using somebody else's calling card number
  - Unauthorized selling of customer lists to telemarketers (an example of the overlap of fraud with privacy breaches)
- Fraud can make systems more vulnerable to subsequent fraud, hence the need for protection mechanisms to avoid future damage

Fraud Vulnerabilities (2)
- Fraudsters [13]:
  - Impersonators: illegitimate users who steal resources from victims (for instance, by taking over their accounts)
  - Swindlers: legitimate users who intentionally benefit from the system or other users by deception (for instance, by obtaining legitimate telecommunications accounts and using them without paying bills)
- Fraud involves abuse of trust [12, 29]: a fraudster strives to present himself as a trustworthy individual and friend, and the more trust one places in others, the more vulnerable one becomes

Vulnerability Research Issues (1)
- Analyze the severity of a vulnerability and its potential impact on an application
  - Qualitative impact analysis: expressed as a low/medium/high degree of performance/availability degradation
  - Quantitative impact: e.g., economic loss, measurable cascade effects, time to recover
- Provide procedures and methods for efficient extraction of characteristics and properties of known vulnerabilities (analogous to understanding how faults occur)
  - Tools searching for known vulnerabilities in metabases cannot anticipate attacker behavior
  - Characteristics of high-risk vulnerabilities can be learned from the behavior of attackers, using honeypots, etc.

Vulnerability Research Issues (2)
- Construct comprehensive taxonomies of vulnerabilities for different application areas
  - Medical systems may have critical privacy vulnerabilities; vulnerabilities in defense systems compromise homeland security
- Propose good taxonomies to facilitate both prevention and elimination of vulnerabilities
- Enhance metabases of vulnerabilities/incidents
  - Reveals characteristics for preventing not only identical but also similar vulnerabilities
  - Contributes to identification of related vulnerabilities, including dangerous synergistic ones; a good model for a set of synergistic vulnerabilities can lead to uncovering gang attack threats or incidents

Vulnerability Research Issues (3)
- Provide models for vulnerabilities and their contexts
  - The challenge: how a vulnerability in one context propagates to another (if Dr. Smith is a high-risk driver, is he a trustworthy doctor?)
  - Different kinds of vulnerabilities are emphasized in different contexts
- Devise quantitative lifecycle vulnerability models for a given type of application or system
  - Exploit unique characteristics of the vulnerabilities and the application/system
  - In each lifecycle phase: determine the most dangerous and common types of vulnerabilities, and use knowledge of those types to prevent them
  - Best defensive procedures are adaptively selected from a predefined set

Vulnerability Research Issues (4)
- The lifecycle models help solve several problems:
  - Avoiding system vulnerabilities most efficiently, by discovering and eliminating them at the design and implementation stages
  - Evaluating/measuring vulnerabilities at each lifecycle stage, in system components, in subsystems, and in the system as a whole
  - Assisting in the most efficient discovery of vulnerabilities before they are exploited by an attacker or a failure
  - Assisting in the most efficient elimination/masking of vulnerabilities (e.g., based on principles analogous to fault tolerance), OR keeping an attacker unaware or uncertain of important system parameters (e.g., by using non-deterministic or deceptive system behavior, increased component diversity, or multiple lines of defense)

Vulnerability Research Issues (5)
- Provide methods of assessing the impact of vulnerabilities on security in applications and systems
  - Create formal descriptions of the impact of vulnerabilities
  - Develop quantitative vulnerability impact evaluation methods
  - Use the resulting ranking for threat/risk analysis
- Identify the fundamental design principles and guidelines for dealing with system vulnerabilities at each lifecycle stage
  - Propose best practices for reducing vulnerabilities at all lifecycle stages (based on the above principles and guidelines)
  - Develop interactive or fully automatic tools and infrastructures encouraging or enforcing use of these best practices
- Other issues: investigate vulnerabilities in security mechanisms themselves, and vulnerabilities due to non-malicious but threat-enabling uses of information [21]

Outline
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4.
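The quantitative impact evaluation and ranking called for in "Vulnerability Research Issues" above can be sketched in a few lines of Python. The scoring formula, the weights, and the field names are all assumptions made up for illustration; the slides call for such methods without prescribing one.

```python
# Sketch of a quantitative vulnerability-impact ranking for
# threat/risk analysis. Weights and field names are arbitrary
# assumptions, not a method from the presentation.

def impact_score(v):
    """Combine economic loss, cascade effects, and recovery time
    (the slide's example quantitative factors) into one score."""
    return (0.5 * v["economic_loss"]
            + 0.3 * v["cascade_effect"]
            + 0.2 * v["time_to_recover"])

def rank_vulnerabilities(vulns):
    """Highest-impact first: a priority order for removal."""
    return sorted(vulns, key=impact_score, reverse=True)

# Hypothetical vulnerabilities with normalized factor values
vulns = [
    {"id": "V1", "economic_loss": 2.0, "cascade_effect": 1.0, "time_to_recover": 5.0},
    {"id": "V2", "economic_loss": 9.0, "cascade_effect": 4.0, "time_to_recover": 2.0},
    {"id": "V3", "economic_loss": 1.0, "cascade_effect": 8.0, "time_to_recover": 1.0},
]
print([v["id"] for v in rank_vulnerabilities(vulns)])  # prints ['V2', 'V3', 'V1']
```

The resulting ranking is exactly the artifact the slides say should feed threat/risk analysis and the "which vulnerability to remove first" decision.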
Fraud Countermeasure Mechanisms

Threats - Topics
- Models of Threats
- Dealing with Threats
  - Threat Avoidance
  - Threat Tolerance
  - Fraud Threat Detection for Threat Tolerance
- Fraud Threats
- Threat Research Issues

Models of Threats
- Threats in the security domain are like errors in the reliability domain: entities that can intentionally exploit or inadvertently trigger specific system vulnerabilities to cause security breaches [16, 27]
- Attacks or accidents materialize threats (changing them from potential to actual)
  - Attack: an intentional exploitation of vulnerabilities
  - Accident: an inadvertent triggering of vulnerabilities
- Threat classifications [26]:
  - Based on actions: threats of illegal access, destruction, modification, and emulation
  - Based on consequences: threats of disclosure, (illegal) execution, misrepresentation, and repudiation

Dealing with Threats
- Options: avoid (prevent) threats, detect threats, eliminate threats, tolerate threats
- Deal with threats based on the degree of risk acceptable to the application
  - Avoid/eliminate threats to human life
  - Tolerate threats to noncritical or redundant components

Dealing with Threats - Threat Avoidance (1)
- The design of threat avoidance techniques is analogous to fault avoidance in reliability
- Threat avoidance methods are frozen after system deployment, so they are effective only against less sophisticated attacks
- Sophisticated attacks require adaptive schemes for threat tolerance [20]: attackers have motivation, resources, and the whole system lifetime to discover its vulnerabilities, and can discover holes in threat avoidance methods

Dealing with Threats - Threat Avoidance (2)
- Understanding threat sources
  - Understand threats posed by humans, their motivation, and potential attack modes [27]
  - Understand threats due to system faults and failures
- Example design guidelines for preventing threats:
  - A model for secure protocols [15]
  - Formal models for analysis of authentication protocols [25, 10]
  - Models for statistical databases to prevent data disclosures [1]

Dealing with Threats - Threat Tolerance
- Useful features of the fault-tolerant approach: it is not concerned with each individual failure, does not spend all resources on dealing with individual failures, and can ignore transient and non-catastrophic errors and failures
- An analogous intrusion-tolerant approach is needed to deal with lesser and common security breaches; e.g., intrusion tolerance for database systems [3]:
  - Phase 1: attack detection; optional (e.g., majority voting schemes don't need detection)
  - Phases 2-5: damage confinement, damage assessment, reconfiguration, continuation of service; can be implicit (e.g., voting schemes follow the same procedure whether attacked or not)
  - Phase 6: report the attack, leading to repair and fault treatment (to prevent a recurrence of similar attacks)

Dealing with Threats - Fraud Threat Detection for Threat Tolerance
- Fraud threat identification is needed
- Fraud detection systems are widely used in telecommunications, online transactions, and insurance; effective systems use both fraud rules and pattern analysis of user behavior
- Challenge: a very high false alarm rate, due to the skewed distribution of fraud occurrences

Fraud Threats
- Some salient features of fraud threats [9]:
  - Fraud is often a malicious opportunistic reaction
  - Fraud escalation is a natural phenomenon
  - Gang fraud can be especially damaging; gang fraudsters can cooperate in misdirecting suspicion onto others
  - Individuals/gangs planning fraud thrive in fuzzy environments, using fuzzy assignments of responsibilities to participating entities
  - Powerful fraudsters create environments that facilitate fraud (e.g., CEOs involved in insider trading)

Threat Research Issues (1)
- Analysis of known threats in context
  - Identify (in metabases) known threats relevant to the context
  - Find salient features of these threats and associations between them; threats can also be associated via their links to related vulnerabilities
  - Infer threat features from features of the vulnerabilities related to them
  - Build a threat taxonomy for the considered context
- Propose qualitative and quantitative models of threats in context, including lifecycle threat models; define measures to determine threat levels
- Devise techniques for avoiding/tolerating threats via unpredictability or non-determinism, for detecting known threats, and for discovering unknown threats

Threat Research Issues (2)
- Develop quantitative threat models using analogies to reliability models
  - E.g., rate threats or attacks using time and effort random variables, and describe the distribution of their random behavior
  - Mean Effort To security Failure (METF): analogous to the Mean Time To Failure (MTTF) reliability measure
  - Mean Time To Patch and Mean Effort To Patch (new security measures): analogous to the Mean Time To Repair (MTTR) reliability measure and to the METF security measure, respectively
- Propose evaluation methods for threat impacts
  - A mere threat (a potential for attack) has its own impact
  - Consider threat properties: direct damage, indirect damage, recovery cost, prevention overhead
  - Consider interaction with other threats and with defensive mechanisms

Threat Research Issues (3)
- Invent algorithms, methods, and design guidelines to reduce the number and severity of threats
  - Consider injection of unpredictability or uncertainty to reduce threats; e.g., reduce data transfer threats by sending portions of critical data through different routes
- Investigate threats to security mechanisms themselves
- Study threat detection: it might be needed for threat tolerance, and it includes investigation of fraud threat detection

Products, Services and Research Programs for Industry (1)
There are numerous commercial products and services, and some free products and services. Examples follow; notation used below: Product (Organization).
- Example vulnerability and incident metabases: CVE (Mitre), ICAT (NIST), OSVDB (osvdb.com), Apache Week Web Server (Red Hat), Cisco Secure Encyclopedia (Cisco), DOVES (Computer Security Laboratory, UC Davis), DragonSoft Vulnerability Database (DragonSoft Security Associates), Secunia Security Advisories (Secunia), SecurityFocus Vulnerability Database (Symantec), SIOS (Yokogawa Electric Corp.), Verletzbarkeits-Datenbank (scip AG), Vigil@nce AQL (Alliance Qualité Logiciel)
- Example vulnerability notification systems: CERT (SEI-CMU), Cassandra (CERIAS-Purdue), ALTAIR (esCERT-UPC), DeepSight Alert Services (Symantec), Mandrake Linux Security Advisories (MandrakeSoft)
- Example other tools (1):
  - Vulnerability assessment tools (for databases, applications, web applications, etc.): AppDetective (Application Security), NeoScanner@ESM (Inzen), AuditPro for SQL Server (Network Intelligence India Pvt. Ltd.), eTrust Policy Compliance (Computer Associates), Foresight (Cubico Solutions CC), IBM Tivoli Risk Manager (IBM), Internet Scanner (Internet Security Systems), NetIQ Vulnerability Manager (NetIQ), N-Stealth (N-Stalker), QualysGuard (Qualys), Retina Network Security Scanner (eEye Digital Security), SAINT (SAINT Corp.), SARA (Advanced Research Corp.), STAT-Scanner (Harris Corp.), StillSecure VAM (StillSecure), Symantec Vulnerability Assessment (Symantec)
  - Automated scanning tools, vulnerability scanners: Automated Scanning (Beyond Security Ltd.), ipLegion/intraLegion (E*MAZE Networks), Managed Vulnerability Assessment (LURHQ Corp.), Nessus Security Scanner (The Nessus Project), NeVO (Tenable Network Security)

Products, Services and Research Programs for Industry (2)
- Example other tools (2):
  - Vulnerability and penetration testing: Attack Tool Kit (Computec.ch), CORE IMPACT (Core Security Technologies), LANPATROL (Network Security Syst.)
  - Intrusion detection systems: Cisco Secure IDS (Cisco), Cybervision Intrusion Detection System (Venus Information Technology), Dragon Sensor (Enterasys Networks), McAfee IntruShield (McAfee), NetScreen-IDP (NetScreen Technologies), Network Box Internet Threat Protection Device (Network Box Corp.)
  - Threat management systems: Symantec ManHunt (Symantec)
- Example services:
  - Vulnerability scanning services: Netcraft Network Examination Service (Netcraft Ltd.)
  - Vulnerability assessment and risk analysis services: ActiveSentry (Intranode), Risk Analysis Subscription Service (Strongbox Security), SecuritySpace Security Audits (E-Soft), Westpoint Enterprise Scan (Westpoint Ltd.)
  - Threat notification: TruSecure IntelliSHIELD Alert Manager (TruSecure Corp.)
  - Patches: Software Security Updates (Microsoft)
- More on metabases/tools/services:
- Example research programs:
  - Microsoft Trustworthy Computing (Security, Privacy, Reliability, Business Integrity)
  - IBM: Almaden - information security; Zurich - information security, privacy, and cryptography; Secure Systems Department; Internet Security group; Cryptography Research Group

Outline
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4.
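The METF measure proposed above in Threat Research Issues (2) is the security analogue of MTTF: attacker *effort* expended until a breach replaces operating *time* until a failure, because breaches are intentional [18]. A minimal sketch of both estimators, using plain sample averaging over invented observations (the data and function names are assumptions for illustration):

```python
# Sketch of the MTTF / METF analogy from Threat Research Issues (2).
# Plain sample means over hypothetical observations; not a method
# or dataset from the presentation.

def mean_time_to_failure(times):
    """MTTF (reliability): average operating time until an
    accidental failure."""
    return sum(times) / len(times)

def mean_effort_to_failure(efforts):
    """METF (security): average attacker effort expended until a
    security breach; effort replaces time because breaches are
    intentional, not random in time."""
    return sum(efforts) / len(efforts)

# Hypothetical observations, e.g. person-hours per successful breach
breach_efforts = [12.0, 30.0, 18.0]
print(mean_effort_to_failure(breach_efforts))  # prints 20.0
```

As the slides caution, the analogy is not "direct": effort need not accumulate sequentially the way time does, so an estimator like this is meaningful only under assumptions about how effort is observed.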
Fraud Countermeasure Mechanisms

Applying Reliability Principles to Security Research (1)
- Apply the science and engineering from Reliability to Security [6]
- Analogies in basic notions [6, 7]:
  - Fault - vulnerability
  - Error (enabled by a fault) - threat (enabled by a vulnerability)
  - Failure/crash (materializes a fault, a consequence of an error) - security breach (materializes a vulnerability, a consequence of a threat)
- Time-effort analogies [18]: the time-to-failure distribution for accidental failures corresponds to the expended effort-to-breach distribution for intentional security breaches
  - This is not a "direct" analogy: it considers important differences between Reliability and Security, most importantly the intentional human factors in Security

Applying Reliability Principles to Security Research (2)
- Analogies from fault avoidance/tolerance [27]:
  - Fault avoidance - threat avoidance
  - Fault tolerance - threat tolerance (gracefully adapts to threats that have materialized)
  - Maybe threat avoidance/tolerance should be named vulnerability avoidance/tolerance, to be consistent with the vulnerability-fault analogy
- The analogy: to deal with failures, build fault-tolerant systems; to deal with security breaches, build threat-tolerant systems

Applying Reliability Principles to Security Research (3)
- Examples of solutions using fault tolerance analogies:
  - Voting and quorums: to increase reliability, require a quorum of voting replicas; to increase security, make forming voting quorums more difficult (not a "direct" analogy but a kind of "reversal" of it)
  - Checkpointing applied to intrusion detection: to increase reliability, use checkpoints to bring the system back to a reliable (e.g., transaction-consistent) state; to increase security, use checkpoints to bring the system back to a secure state
  - Adaptability / self-healing: adapt to common and less severe security breaches as we adapt to everyday and relatively benign failures; adapt to the timing, severity, duration, and extent of a security breach

Applying Reliability Principles to Security Research (4)
- Beware: Reliability analogies are not always helpful
  - There are differences between seemingly identical notions; e.g., "system boundaries" are less open for Reliability than for Security
  - No simple analogies exist for intentional security breaches arising from planted malicious faults; in such cases, the analogy of time (Reliability) to effort (Security) is meaningless (e.g., sequential time vs. non-sequential effort; long time duration vs. "nearly instantaneous" effort)
  - No simple analogies exist when attack efforts are concentrated in time; as before, the analogy of time to effort is meaningless

Outline
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4. Fraud Countermeasure Mechanisms

Basic Idea - Using Trust in Role-based Access Control (RBAC)
- Traditional identity-based approaches to access control are inadequate: they don't fit open computing, including Internet-based computing [28]
- Idea: use trust to enhance user authentication and authorization
  - Enhance role-based access control (RBAC) by using trust, based on user behavior, in addition to traditional credentials
- Trust is related to vulnerabilities and threats: trustworthy users don't exploit vulnerabilities and don't become threats

Overview - Using Trust in RBAC (1)
- A trust-enhanced role-mapping (TERM) server is added to a system with RBAC
- Collect and use evidence related to the trustworthiness of user behavior
  - Formalize evidence type and evidence; different forms of evidence must be accommodated
  - An evidence statement includes the evidence and an opinion; the opinion tells how much the evidence provider trusts the evidence he provides

Overview - Using Trust in RBAC (2)
- The TERM architecture includes:
  - An algorithm to evaluate the credibility of evidence, based on its associated opinion and on evidence about the trustworthiness of the opinion's issuer
  - A declarative language to define role assignment policies
  - An algorithm to assign roles to users, based on role assignment policies and evidence statements
  - An algorithm to continuously update trustworthiness ratings for users; its output is used to grant or disallow access requests
- The trustworthiness rating of a recommender is affected by the trustworthiness ratings of all users he recommended

Overview - Using Trust in RBAC (3)
- A prototype TERM server; software available at:
- More details on "Using Trust in RBAC" are available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloquia

Outline
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4.
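The TERM idea described above (roles granted on credentials plus a continuously updated, behavior-based trust rating) can be sketched as follows. This is not the TERM server's algorithm: the thresholds, the exponential-moving-average update rule, and every name here are assumptions invented for illustration.

```python
# Sketch of trust-enhanced RBAC in the spirit of TERM. The update
# rule, thresholds, and names are illustrative assumptions only.

class TrustRBAC:
    def __init__(self, role_policies):
        # role -> minimum trust rating required, e.g. {"admin": 0.7}
        self.role_policies = role_policies
        self.trust = {}  # user -> rating in [0, 1]

    def update_trust(self, user, behaved_well, alpha=0.3):
        """Continuously update a user's rating from observed
        behavior (exponential moving average, neutral prior 0.5)."""
        old = self.trust.get(user, 0.5)
        obs = 1.0 if behaved_well else 0.0
        self.trust[user] = (1 - alpha) * old + alpha * obs

    def assign_role(self, user, role, has_credentials):
        """Grant the role only if BOTH traditional credentials
        and the behavior-based trust rating suffice."""
        return (has_credentials
                and self.trust.get(user, 0.5) >= self.role_policies[role])

rbac = TrustRBAC({"admin": 0.7, "guest": 0.2})
for _ in range(5):
    rbac.update_trust("alice", behaved_well=True)
print(rbac.assign_role("alice", "admin", has_credentials=True))  # prints True
```

The key property the sketch shows: credentials alone no longer suffice, which is exactly the enhancement over identity-based access control that the slides argue for.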
Fraud Countermeasure Mechanisms

Basic Terms - Privacy-preserving Data Dissemination
- "Guardian": an entity entrusted by private data owners with collection, storage, or transfer of their data
  - An owner can be a guardian for its own private data
  - An owner can be an institution or a computing system
- Guardians are allowed or required by law to share private data: with the owner's explicit consent, or without the consent as required by law (research, court order, etc.)
- [Figure: a dissemination graph - the "Owner" (private data owner) passes "Data" (private data) to Guardian 1 (the original guardian), which passes it to second-level Guardians 2 and 3, and on to third-level Guardians 4, 5, and 6]

Problem of Privacy Preservation
- A guardian passes private data to another guardian in a data dissemination chain (a chain within a graph, possibly cyclic)
- Owner privacy preferences may not be transmitted, due to neglect or failure; the risk grows with chain length and with the fallibility and hostility of the milieu
- If the preferences are lost, the receiving guardian is unable to honor them

Challenges
- Ensuring that the owner's metadata (which include the owner's privacy preferences) are never decoupled from his data
- Efficient protection in a hostile milieu; example threats: uncontrolled data dissemination; intentional or accidental data corruption, substitution, or disclosure
- Detection of a loss of data or metadata
- Efficient recovery of data and metadata; recovery by retransmission from the original guardian is the most trustworthy

Overview - Privacy-preserving Data Dissemination
- Use bundles to make data and metadata inseparable: bundle = self-descriptive private data + its metadata; e.g., encrypt or obfuscate the bundle to prevent separation
- Each bundle includes a mechanism for apoptosis (clean self-destruction); a bundle chooses apoptosis when threatened with a successful hostile attack
- Develop distance-based evaporation of bundles; e.g., the more "distant" from its owner a bundle is, the more it evaporates (becoming more distorted)
- More details on "Privacy-preserving Data Dissemination" are available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloquia

Outline
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4. Fraud Countermeasure Mechanisms

Overview - Fraud Countermeasure Mechanisms (1)
- The system monitors user behavior and decides whether the user's behavior qualifies as fraudulent
- Three types of fraudulent behavior are identified:
  - "Uncovered deceiving intention": the user misbehaves all the time
  - "Trapping intention": the user behaves well at first, then commits fraud
  - "Illusive intention": the user exhibits cyclic behavior, with longer periods of proper behavior separated by shorter periods of misbehavior

Overview - Fraud Countermeasure Mechanisms (2)
- System architecture for swindler detection:
  - Profile-based anomaly detector: monitors suspicious actions, searching for the identified fraudulent behavior patterns
  - State transition analysis: provides a state description when an activity results in entering a dangerous state
  - Deceiving intention predictor: discovers deceiving intention based on satisfaction ratings
  - Decision making: decides whether to raise a fraud alarm when a deceiving pattern is discovered

Overview - Fraud Countermeasure Mechanisms (3)
- The experiments performed validated the architecture: all three types of fraudulent behavior were quickly detected
- More details on "Fraud Countermeasure Mechanisms" are available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloquia

Summary
Presented:
- Vulnerabilities
- Threats
- Mechanisms to Reduce Vulnerabilities and Threats
  3.1. Applying Reliability and Fault Tolerance Principles to Security Research
  3.2. Using Trust in Role-based Access Control
  3.3. Privacy-preserving Data Dissemination
  3.4.
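The three fraudulent-behavior patterns described in Section 3.4 ("uncovered deceiving intention", "trapping intention", "illusive intention") can be sketched as a classifier over a sequence of per-period behavior observations. The classification rules below are a deliberate simplification invented for illustration, not the presentation's detector.

```python
# Sketch of the three fraudulent-behavior patterns from the slides,
# classified from booleans (True = proper behavior, False =
# misbehavior), oldest first. Rules are illustrative assumptions.

def classify_intention(behavior):
    if not any(behavior):
        return "uncovered deceiving"      # misbehaves all the time
    good_prefix = 0
    for b in behavior:
        if b:
            good_prefix += 1
        else:
            break
    # Good at first, then fraud with no return to proper behavior
    if 0 < good_prefix < len(behavior) and not any(behavior[good_prefix:]):
        return "trapping"
    # Proper and improper periods alternate
    if True in behavior and False in behavior:
        return "illusive"
    return "benign"                       # no misbehavior observed

print(classify_intention([False, False, False]))       # prints uncovered deceiving
print(classify_intention([True, True, False, False]))  # prints trapping
print(classify_intention([True, False, True, False]))  # prints illusive
```

A real detector, as the slides note, would work from continuous satisfaction ratings and profile-based anomaly scores rather than clean booleans; the sketch only captures the shape of the three patterns.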
Fraud Countermeasure Mechanisms

Conclusions
- An exciting area of research
- 20 years of research in Reliability can form a basis for vulnerability and threat studies in Security
- Need to quantify threats, risks, and potential impacts on distributed applications
- Do not be terrorized and act scared: adapt and use resources to deal with different threat levels
- Government, industry, and the public are interested in progress in this research

References
[1] N.R. Adam and J.C. Wortmann, "Security-Control Methods for Statistical Databases: A Comparative Study," ACM Computing Surveys, Vol. 21, No. 4, Dec. 1989.
[2] The American Heritage Dictionary of the English Language, Fourth Edition, Houghton Mifflin, 2000.
[3] P. Ammann, S. Jajodia, and P. Liu, "A Fault Tolerance Approach to Survivability," in Computer Security, Dependability, and Assurance: From Needs to Solutions, IEEE Computer Society Press, Los Alamitos, CA, 1999.
[4] W.A. Arbaugh et al., "Windows of Vulnerability: A Case Study Analysis," IEEE Computer, Vol. 33, No. 12, Dec. 2000, pp. 52-59.
[5] A. Avizienis, J.C. Laprie, and B. Randell, "Fundamental Concepts of Dependability," Research Report N01145, LAAS-CNRS, Apr. 2001.
[6] A. Bhargava and B. Bhargava, "Applying fault-tolerance principles to security research," Proc. IEEE Symposium on Reliable Distributed Systems, New Orleans, Oct. 2001.
[7] B. Bhargava, "Security in Mobile Networks," NSF Workshop on Context-Aware Mobile Database Management (CAMM), Brown University, Jan. 2002.
[8] B. Bhargava (ed.), Concurrency Control and Reliability in Distributed Systems, Van Nostrand Reinhold, 1987.
[9] B. Bhargava, "Vulnerabilities and Fraud in Computing Systems," Proc. Intl. Conf. IPSI, Sv. Stefan, Serbia and Montenegro, Oct. 2003.
[10] B. Bhargava, S. Kamisetty, and S. Madria, "Fault-tolerant authentication and group key management in mobile computing," Intl. Conf. on Internet Computing, Las Vegas, June 2000.
[11] B. Bhargava and L. Lilien, "Private and Trusted Collaborations," Proc. Secure Knowledge Management (SKM 2004): A Workshop, Amherst, NY, Sep. 2004.
[12] B. Bhargava and Y. Zhong, "Authorization Based on Evidence and Trust," Proc. Intl. Conf. on Data Warehousing and Knowledge Discovery (DaWaK 2002), Aix-en-Provence, France, Sep. 2002.
[13] B. Bhargava, Y. Zhong, and Y. Lu, "Fraud Formalization and Detection," Proc. Intl. Conf. on Data Warehousing and Knowledge Discovery (DaWaK 2003), Prague, Czechia, Sep. 2003.
[14] M. Dacier, Y. Deswarte, and M. Kaâniche, "Quantitative Assessment of Operational Security: Models and Tools," Technical Report, LAAS Report 96493, May 1996.
[15] N. Heintze and J.D. Tygar, "A Model for Secure Protocols and Their Compositions," IEEE Transactions on Software Engineering, Vol. 22, No. 1, 1996, pp. 16-30.
[16] E. Jonsson et al., "On the Functional Relation Between Security and Dependability Impairments," Proc. 1999 Workshop on New Security Paradigms, Sep. 1999, pp. 104-111.
[17] I. Krsul, E.H. Spafford, and M. Tripunitara, "Computer Vulnerability Analysis," Technical Report COAST TR 98-07, Dept. of Computer Sciences, Purdue University, 1998.
[18] B. Littlewood et al., "Towards Operational Measures of Computer Security," Journal of Computer Security, Vol. 2, 1993, pp. 211-229.
[19] F. Maymir-Ducharme, P.C. Clements, K. Wallnau, and R.W. Krut, "The Unified Information Security Architecture," Technical Report CMU/SEI-95-TR-015, Oct. 1995.
[20] N.R. Mead, R.J. Ellison, R.C. Linger, T. Longstaff, and J. McHugh, "Survivable Network Analysis Method," Technical Report CMU/SEI-2000-TR-013, Pittsburgh, PA, Sep. 2000.
[21] C. Meadows, "Applying the Dependability Paradigm to Computer Security," Proc. Workshop on New Security Paradigms, Sep. 1995, pp. 75-81.
[22] P.C. Meunier and E.H. Spafford, "Running the free vulnerability notification system Cassandra," Proc. 14th Annual Computer Security Incident Handling Conference, Hawaii, Jan. 2002.
[23] C.R. Ramakrishnan and R. Sekar, "Model-Based Analysis of Configuration Vulnerabilities," Proc. Second Intl. Workshop on Verification, Model Checking, and Abstract Interpretation (VMCAI'98), Pisa, Italy, 2000.
[24] B. Randell, "Dependability - a Unifying Concept," in Computer Security, Dependability, and Assurance: From Needs to Solutions, IEEE Computer Society Press, Los Alamitos, CA, 1999.
[25] A.D. Rubin and P. Honeyman, "Formal Methods for the Analysis of Authentication Protocols," Technical Report 93-7, Dept. of Electrical Engineering and Computer Science, University of Michigan, Nov. 1993.
[26] G. Song et al., "CERIAS Classic Vulnerability Database User Manual," Technical Report 2000-17, CERIAS, Purdue University, West Lafayette, IN, 2000.
[27] G. Stoneburner, A. Goguen, and A. Feringa, "Risk Management Guide for Information Technology Systems," NIST Special Publication 800-30, Washington, DC, 2001.
[28] M. Winslett et al., "Negotiating trust on the web," IEEE Internet Computing, Special Issue on Trust Management, Vol. 6, No. 6, Nov. 2002.
[29] Y. Zhong, Y. Lu, and B. Bhargava, "Dynamic Trust Production Based on Interaction Sequence," Technical Report CSD-TR 03-006, Dept. of Computer Sciences, Purdue University, Mar. 2003.

The extended version of this presentation is available at: www.cs.purdue.edu/people/bb#colloquia

Thank you!
