• Stars
    star
    1
  • Language
  • Created almost 4 years ago
  • Updated almost 4 years ago

Reviews

There are no reviews yet. Be the first to send feedback to the community and the maintainers!

Repository Details

Username:

More Repositories

1

neurons

An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68] Decision trees Main article: Decision tree learning Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making. Support vector machines Main article: Support vector machines Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[69] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. Illustration of linear regression on a data set. Regression analysis Main article: Regression analysis Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization (mathematics) methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[70]), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space. Bayesian networks Main article: Bayesian network A simple Bayesian network. Rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet. A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. Genetic algorithms Main article: Genetic algorithm A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[71][72] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[73] Training models Usually, machine learning models require a lot of data in order for them to perform well. Usually, when training a machine learning model, one needs to collect a large, representative sample of data from a training set. Data from the training set can be as varied as a corpus of text, a collection of images, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Federated learning Main article: Federated learning Federated learning is an adapted form of distributed artificial intelligence to training machine learning models that decentralizes the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[74] Applications There are many applications for machine learning, including: Agriculture Anatomy Adaptive websites Affective computing Banking Bioinformatics Brain–machine interfaces Cheminformatics Citizen science Computer networks Computer vision Credit-card fraud detection Data quality DNA sequence classification Economics Financial market analysis[75] General game playing Handwriting recognition Information retrieval Insurance Internet fraud detection Linguistics Machine learning control Machine perception Machine translation Marketing Medical diagnosis Natural language processing Natural language understanding Online advertising Optimization Recommender systems Robot locomotion Search engines Sentiment analysis Sequence mining Software engineering Speech recognition Structural health monitoring Syntactic pattern recognition Telecommunication Theorem proving Time series forecasting User behavior analytics In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[76] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[77] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[78] In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[79] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists.[80] In 2019 Springer Nature published the first research book created using machine learning.[81] Limitations Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[82][83][84] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[85] In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[86] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[87][88] Bias Main article: Algorithmic bias Machine learning approaches in particular can suffer from different data biases. A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society.[89] Language models learned from data have been shown to contain human-like biases.[90][91] Machine learning systems used for criminal risk assessment have been found to be biased against black people.[92][93] In 2015, Google photos would often tag black people as gorillas,[94] and in 2018 this still was not well resolved, but Google reportedly was still using the workaround to remove all gorillas from the training data, and thus was not able to recognize real gorillas at all.[95] Similar issues with recognizing non-white people have been found in many other systems.[96] In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[97] Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[98] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There’s nothing artificial about AI...It’s inspired by people, it’s created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.”[99] Model assessments Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validation method randomly partitions the data into K subsets and then K experiments are performed each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[100] In addition to overall accuracy, investigators frequently report sensitivity and specificity meaning True Positive Rate (TPR) and True Negative Rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, thus TOC provides more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC).[101] Ethics Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[102] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants against similarity to previous successful applicants.[103][104] Responsible collection of data and documentation of algorithmic rules used by a system thus is a critical part of machine learning. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[105][106] Other forms of ethical challenges, not related to personal biases, are more seen in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is huge potential for machine learning in health care to provide professionals a great tool to diagnose, medicate, and even plan recovery paths for patients, but this will not happen until the personal biases mentioned previously, and these "greed" biases are addressed.[107] Hardware Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units.[108] By 2019, graphic processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[109] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[110][111] Software Software suites containing a variety of machine learning algorithms include the following: Free and open-source so
56
star
2

References

Poole, Mackworth & Goebel 1998, p. 1. Russell & Norvig 2003, p. 55. Definition of AI as the study of intelligent agents: Poole, Mackworth & Goebel (1998), which provides the version that is used in this article. These authors use the term "computational intelligence" as a synonym for artificial intelligence.[1] Russell & Norvig (2003) (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field".[2] Nilsson 1998 Legg & Hutter 2007 Russell & Norvig 2009, p. 2. McCorduck 2004, p. 204 Maloof, Mark. "Artificial Intelligence: An Introduction, p. 37" (PDF). georgetown.edu. Archived (PDF) from the original on 25 August 2018. "How AI Is Getting Groundbreaking Changes In Talent Management And HR Tech". Hackernoon. Archived from the original on 11 September 2019. Retrieved 14 February 2020. Schank, Roger C. (1991). "Where's the AI". AI magazine. Vol. 12 no. 4. p. 38. Russell & Norvig 2009. "AlphaGo – Google DeepMind". Archived from the original on 10 March 2016. Allen, Gregory (April 2020). "Department of Defense Joint AI Center - Understanding AI Technology" (PDF). AI.mil - The official site of the Department of Defense Joint Artificial Intelligence Center. Archived (PDF) from the original on 21 April 2020. Retrieved 25 April 2020. Optimism of early AI: * Herbert Simon quote: Simon 1965, p. 96 quoted in Crevier 1993, p. 109. * Marvin Minsky quote: Minsky 1967, p. 2 quoted in Crevier 1993, p. 109. Boom of the 1980s: rise of expert systems, Fifth Generation Project, Alvey, MCC, SCI: * McCorduck 2004, pp. 426–441 * Crevier 1993, pp. 161–162,197–203, 211, 240 * Russell & Norvig 2003, p. 24 * NRC 1999, pp. 210–211 * Newquist 1994, pp. 235–248 First AI Winter, Mansfield Amendment, Lighthill report * Crevier 1993, pp. 115–117 * Russell & Norvig 2003, p. 22 * NRC 1999, pp. 212–213 * Howe 1994 * Newquist 1994, pp. 189–201 Second AI winter: * McCorduck 2004, pp. 430–435 * Crevier 1993, pp. 209–210 * NRC 1999, pp. 214–216 * Newquist 1994, pp. 301–318 AI becomes hugely successful in the early 21st century * Clark 2015 Pamela McCorduck (2004, p. 424) writes of "the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics ... and these with own sub-subfield—that would hardly have anything to say to each other." This list of intelligent traits is based on the topics covered by the major AI textbooks, including: * Russell & Norvig 2003 * Luger & Stubblefield 2004 * Poole, Mackworth & Goebel 1998 * Nilsson 1998 Kolata 1982. Maker 2006. Biological intelligence vs. intelligence in general: Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering. McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones." Kolata 1982, a paper in Science, which describes McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real".[19] McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence".[20]. Neats vs. scruffies: * McCorduck 2004, pp. 421–424, 486–489 * Crevier 1993, p. 168 * Nilsson 1983, pp. 10–11 Symbolic vs. sub-symbolic AI: * Nilsson (1998, p. 7), who uses the term "sub-symbolic". General intelligence (strong AI) is discussed in popular introductions to AI: * Kurzweil 1999 and Kurzweil 2005 See the Dartmouth proposal, under Philosophy, below. McCorduck 2004, p. 34. McCorduck 2004, p. xviii. McCorduck 2004, p. 3. McCorduck 2004, pp. 340–400. This is a central idea of Pamela McCorduck's Machines Who Think. She writes: "I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition."[26] "Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized."[27] "Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn't, we have engaged for a long time in this odd form of self-reproduction."[28] She traces the desire back to its Hellenistic roots and calls it the urge to "forge the Gods."[29] "Stephen Hawking believes AI could be mankind's last accomplishment". BetaNews. 21 October 2016. Archived from the original on 28 August 2017. Lombardo P, Boehm I, Nairz K (2020). "RadioComics – Santa Claus and the future of radiology". Eur J Radiol. 122 (1): 108771. doi:10.1016/j.ejrad.2019.108771. PMID 31835078. Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?". The Guardian. Archived from the original on 16 June 2018. Retrieved 13 January 2018. AI applications widely used behind the scenes: * Russell & Norvig 2003, p. 28 * Kurzweil 2005, p. 265 * NRC 1999, pp. 216–222 * Newquist 1994, pp. 189–201 AI in myth: * McCorduck 2004, pp. 4–5 * Russell & Norvig 2003, p. 939 AI in early science fiction. * McCorduck 2004, pp. 17–25 Formal reasoning: * Berlinski, David (2000). The Advent of the Algorithm. Harcourt Books. ISBN 978-0-15-601391-8. OCLC 46890682. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Turing, Alan (1948), "Machine Intelligence", in Copeland, B. Jack (ed.), The Essential Turing: The ideas that gave birth to the computer age, Oxford: Oxford University Press, p. 412, ISBN 978-0-19-825080-7 Russell & Norvig 2009, p. 16. Dartmouth conference: * McCorduck 2004, pp. 111–136 * Crevier 1993, pp. 47–49, who writes "the conference is generally recognized as the official birthdate of the new science." * Russell & Norvig 2003, p. 17, who call the conference "the birth of artificial intelligence." * NRC 1999, pp. 200–201 McCarthy, John (1988). "Review of The Question of Artificial Intelligence". Annals of the History of Computing. 10 (3): 224–229., collected in McCarthy, John (1996). "10. Review of The Question of Artificial Intelligence". Defending AI Research: A Collection of Essays and Reviews. CSLI., p. 73, "[O]ne of the reasons for inventing the term "artificial intelligence" was to escape association with "cybernetics". Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him." Hegemony of the Dartmouth conference attendees: * Russell & Norvig 2003, p. 17, who write "for the next 20 years the field would be dominated by these people and their students." * McCorduck 2004, pp. 129–130 Russell & Norvig 2003, p. 18. Schaeffer J. (2009) Didn't Samuel Solve That Game?. In: One Jump Ahead. Springer, Boston, MA Samuel, A. L. (July 1959). "Some Studies in Machine Learning Using the Game of Checkers". IBM Journal of Research and Development. 3 (3): 210–229. CiteSeerX 10.1.1.368.2254. doi:10.1147/rd.33.0210. "Golden years" of AI (successful symbolic reasoning programs 1956–1973): * McCorduck 2004, pp. 243–252 * Crevier 1993, pp. 52–107 * Moravec 1988, p. 9 * Russell & Norvig 2003, pp. 18–21 The programs described are Arthur Samuel's checkers program for the IBM 701, Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU. DARPA pours money into undirected pure research into AI during the 1960s: * McCorduck 2004, p. 131 * Crevier 1993, pp. 51, 64–65 * NRC 1999, pp. 204–205 AI in England: * Howe 1994 Lighthill 1973. Expert systems: * ACM 1998, I.2.1 * Russell & Norvig 2003, pp. 22–24 * Luger & Stubblefield 2004, pp. 227–331 * Nilsson 1998, chpt. 17.4 * McCorduck 2004, pp. 327–335, 434–435 * Crevier 1993, pp. 145–62, 197–203 * Newquist 1994, pp. 155–183 Mead, Carver A.; Ismail, Mohammed (8 May 1989). Analog VLSI Implementation of Neural Systems (PDF). The Kluwer International Series in Engineering and Computer Science. 80. Norwell, MA: Kluwer Academic Publishers. doi:10.1007/978-1-4613-1639-8. ISBN 978-1-4613-1639-8. Archived from the original (PDF) on 6 November 2019. Retrieved 24 January 2020. Formal methods are now preferred ("Victory of the neats"): * Russell & Norvig 2003, pp. 25–26 * McCorduck 2004, pp. 486–487 McCorduck 2004, pp. 480–483. Markoff 2011. "Ask the AI experts: What's driving today's progress in AI?". McKinsey & Company. Archived from the original on 13 April 2018. Retrieved 13 April 2018. Administrator. "Kinect's AI breakthrough explained". i-programmer.info. Archived from the original on 1 February 2016. Rowinski, Dan (15 January 2013). "Virtual Personal Assistants & The Future Of Your Smartphone [Infographic]". ReadWrite. Archived from the original on 22 December 2015. "Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol". BBC News. 12 March 2016. Archived from the original on 26 August 2016. Retrieved 1 October 2016. Metz, Cade (27 May 2017). "After Win in China, AlphaGo's Designers Explore New AI". Wired. Archived from the original on 2 June 2017. "World's Go Player Ratings". May 2017. Archived from the original on 1 April 2017. "柯洁迎19岁生日 雄踞人类世界排名第一已两年" (in Chinese). May 2017. Archived from the original on 11 August 2017. Clark, Jack (8 December 2015). "Why 2015 Was a Breakthrough Year in Artificial Intelligence". Bloomberg News. Archived from the original on 23 November 2016. Retrieved 23 November 2016. After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever. "Reshaping Business With Artificial Intelligence". MIT Sloan Management Review. Archived from the original on 19 May 2018. Retrieved 2 May 2018. Lorica, Ben (18 December 2017). "The state of AI adoption". O'Reilly Media. Archived from the original on 2 May 2018. Retrieved 2 May 2018. Allen, Gregory (6 February 2019). "Understanding China's AI Strategy". Center for a New American Security. Archived from the original on 17 March 2019. "Review | How two AI superpowers – the U.S. and China – battle for supremacy in the field". Washington Post. 2 November 2018. Archived from the original on 4 November 2018. Retrieved 4 November 2018. at 10:11, Alistair Dabbs 22 Feb 2019. "Artificial Intelligence: You know it isn't real, yeah?". www.theregister.co.uk. Archived from the original on 21 May 2020. Retrieved 22 August 2020. "Stop Calling it Artificial Intelligence". Archived from the original on 2 December 2019. Retrieved 1 December 2019. "AI isn't taking over the world – it doesn't exist yet". GBG Global website. Archived from the original on 11 August 2020. Retrieved 22 August 2020. Kaplan, Andreas; Haenlein, Michael (1 January 2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004. Domingos 2015, Chapter 5. Domingos 2015, Chapter 7. Lindenbaum, M., Markovitch, S., & Rusakov, D. (2004). Selective sampling for nearest neighbor classifiers. Machine learning, 54(2), 125–152. Domingos 2015, Chapter 1. Intractability and efficiency and the combinatorial explosion: * Russell & Norvig 2003, pp. 9, 21–22 Domingos 2015, Chapter 2, Chapter 3. Hart, P. E.; Nilsson, N. J.; Raphael, B. (1972). "Correction to "A Formal Basis for the Heuristic Determination of Minimum Cost Paths"". SIGART Newsletter (37): 28–29. doi:10.1145/1056777.1056779. S2CID 6386648. Domingos 2015, Chapter 2, Chapter 4, Chapter 6. "Can neural network computers learn from experience, and if so, could they ever become what we would call 'smart'?". Scientific American. 2018. Archived from the original on 25 March 2018. Retrieved 24 March 2018. Domingos 2015, Chapter 6, Chapter 7. Domingos 2015, p. 286. "Single pixel change fools AI programs". BBC News. 3 November 2017. Archived from the original on 22 March 2018. Retrieved 12 March 2018. "AI Has a Hallucination Problem That's Proving Tough to Fix". WIRED. 2018. Archived from the original on 12 March 2018. Retrieved 12 March 2018. Matti, D.; Ekenel, H. K.; Thiran, J. P. (2017). Combining LiDAR space clustering and convolutional neural networks for pedestrian detection. 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). pp. 1–6. arXiv:1710.06160. doi:10.1109/AVSS.2017.8078512. ISBN 978-1-5386-2939-0. S2CID 2401976. Ferguson, Sarah; Luders, Brandon; Grande, Robert C.; How, Jonathan P. (2015). Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions. Algorithmic Foundations of Robotics XI. Springer Tracts in Advanced Robotics. 107. Springer, Cham. pp. 161–177. arXiv:1405.5581. doi:10.1007/978-3-319-16595-0_10. ISBN 978-3-319-16594-3. S2CID 8681101. "Cultivating Common Sense | DiscoverMagazine.com". Discover Magazine. 2017. Archived from the original on 25 March 2018. Retrieved 24 March 2018. Davis, Ernest; Marcus, Gary (24 August 2015). "Commonsense reasoning and commonsense knowledge in artificial intelligence". Communications of the ACM. 58 (9): 92–103. doi:10.1145/2701413. S2CID 13583137. Archived from the original on 22 August 2020. Retrieved 6 April 2020. Winograd, Terry (January 1972). "Understanding natural language". Cognitive Psychology. 3 (1): 1–191. doi:10.1016/0010-0285(72)90002-3. "Don't worry: Autonomous cars aren't coming tomorrow (or next year)". Autoweek. 2016. Archived from the original on 25 March 2018. Retrieved 24 March 2018. Knight, Will (2017). "Boston may be famous for bad drivers, but it's the testing ground for a smarter self-driving car". MIT Technology Review. Archived from the original on 22 August 2020. Retrieved 27 March 2018. Prakken, Henry (31 August 2017). "On the problem of making autonomous vehicles conform to traffic law". Artificial Intelligence and Law. 25 (3): 341–363. doi:10.1007/s10506-017-9210-0. Lieto, Antonio (May 2018). "The knowledge level in cognitive architectures: Current limitations and possible developments". Cognitive Systems Research. 48: 39–55. doi:10.1016/j.cogsys.2017.05.001. hdl:2318/1665207. S2CID 206868967. Problem solving, puzzle solving, game playing and deduction: * Russell & Norvig 2003, chpt. 3–9, * Poole, Mackworth & Goebel 1998, chpt. 2,3,7,9, * Luger & Stubblefield 2004, chpt. 3,4,6,8, * Nilsson 1998, chpt. 7–12 Uncertain reasoning: * Russell & Norvig 2003, pp. 452–644, * Poole, Mackworth & Goebel 1998, pp. 345–395, * Luger & Stubblefield 2004, pp. 333–381, * Nilsson 1998, chpt. 19 Psychological evidence of sub-symbolic reasoning: * Wason & Shapiro (1966) showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) * Kahneman, Slovic & Tversky (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). * Lakoff & Núñez (2000) have controversially argued that even our skills at mathematics depend on knowledge and skills that come from "the body", i.e. sensorimotor and perceptual skills. (See Where Mathematics Comes From) Knowledge representation: * ACM 1998, I.2.4, * Russell & Norvig 2003, pp. 320–363, * Poole, Mackworth & Goebel 1998, pp. 23–46, 69–81, 169–196, 235–277, 281–298, 319–345, * Luger & Stubblefield 2004, pp. 227–243, * Nilsson 1998, chpt. 18 Knowledge engineering: * Russell & Norvig 2003, pp. 260–266, * Poole, Mackworth & Goebel 1998, pp. 199–233, * Nilsson 1998, chpt. ≈17.1–17.4 Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts): * Russell & Norvig 2003, pp. 349–354, * Poole, Mackworth & Goebel 1998, pp. 174–177, * Luger & Stubblefield 2004, pp. 248–258, * Nilsson 1998, chpt. 18.3 Representing events and time:Situation calculus, event calculus, fluent calculus (including solving the frame problem): * Russell & Norvig 2003, pp. 328–341, * Poole, Mackworth & Goebel 1998, pp. 281–298, * Nilsson 1998, chpt. 18.2 Causal calculus: * Poole, Mackworth & Goebel 1998, pp. 335–337 Representing knowledge about knowledge: Belief calculus, modal logics: * Russell & Norvig 2003, pp. 341–344, * Poole, Mackworth & Goebel 1998, pp. 275–277 Sikos, Leslie F. (June 2017). Description Logics in Multimedia Reasoning. Cham: Springer. doi:10.1007/978-3-319-54066-5. ISBN 978-3-319-54066-5. S2CID 3180114. Archived from the original on 29 August 2017. Ontology: * Russell & Norvig 2003, pp. 320–328 Smoliar, Stephen W.; Zhang, HongJiang (1994). "Content based video indexing and retrieval". IEEE Multimedia. 1 (2): 62–72. doi:10.1109/93.311653. S2CID 32710913. Neumann, Bernd; Möller, Ralf (January 2008). "On scene interpretation with description logics". Image and Vision Computing. 26 (1): 82–101. doi:10.1016/j.imavis.2007.08.013. Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). "Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations". Journal of the American Medical Informatics Association. 13 (4): 369–371. doi:10.1197/jamia.M2055. PMC 1513681. PMID 16622160. MCGARRY, KEN (1 December 2005). "A survey of interestingness measures for knowledge discovery". The Knowledge Engineering Review. 20 (1): 39–61. doi:10.1017/S0269888905000408. S2CID 14987656. Bertini, M; Del Bimbo, A; Torniai, C (2006). "Automatic annotation and semantic retrieval of video sequences using multimedia ontologies". MM '06 Proceedings of the 14th ACM international conference on Multimedia. 14th ACM international conference on Multimedia. Santa Barbara: ACM. pp. 679–682. Qualification problem: * McCarthy & Hayes 1969 * Russell & Norvig 2003[page needed] While McCarthy was primarily concerned with issues in the logical representation of actions, Russell & Norvig 2003 apply the term to the more general issue of default reasoning in the vast network of assumptions underlying all our commonsense knowledge. Default reasoning and default logic, non-monotonic logics, circumscription, closed world assumption, abduction (Poole et al. places abduction under "default reasoning". Luger et al. places this under "uncertain reasoning"): * Russell & Norvig 2003, pp. 354–360, * Poole, Mackworth & Goebel 1998, pp. 248–256, 323–335, * Luger & Stubblefield 2004, pp. 335–363, * Nilsson 1998, ~18.3.3 Breadth of commonsense knowledge: * Russell & Norvig 2003, p. 21, * Crevier 1993, pp. 113–114, * Moravec 1988, p. 13, * Lenat & Guha 1989 (Introduction) Dreyfus & Dreyfus 1986. Gladwell 2005. Expert knowledge as embodied intuition: * Dreyfus & Dreyfus 1986 (Hubert Dreyfus is a philosopher and critic of AI who was among the first to argue that most useful human knowledge was encoded sub-symbolically. See Dreyfus' critique of AI) * Gladwell 2005 (Gladwell's Blink is a popular introduction to sub-symbolic reasoning and knowledge.) * Hawkins & Blakeslee 2005 (Hawkins argues that sub-symbolic knowledge should be the primary focus of AI research.) Planning: * ACM 1998, ~I.2.8, * Russell & Norvig 2003, pp. 375–459, * Poole, Mackworth & Goebel 1998, pp. 281–316, * Luger & Stubblefield 2004, pp. 314–329, * Nilsson 1998, chpt. 10.1–2, 22 Information value theory: * Russell & Norvig 2003, pp. 600–604 Classical planning: * Russell & Norvig 2003, pp. 375–430, * Poole, Mackworth & Goebel 1998, pp. 281–315, * Luger & Stubblefield 2004, pp. 314–329, * Nilsson 1998, chpt. 10.1–2, 22 Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning: * Russell & Norvig 2003, pp. 430–449 Multi-agent planning and emergent behavior: * Russell & Norvig 2003, pp. 449–455 Turing 1950. Solomonoff 1956. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".[120] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[121] This is a form of Tom Mitchell's widely quoted definition of machine learning: "A computer program is set to learn from an experience E with respect to some task T and some performance measure P if its performance on T as measured by P improves with experience E." Learning: * ACM 1998, I.2.6, * Russell & Norvig 2003, pp. 649–788, * Poole, Mackworth & Goebel 1998, pp. 397–438, * Luger & Stubblefield 2004, pp. 385–542, * Nilsson 1998, chpt. 3.3, 10.3, 17.5, 20 Jordan, M. I.; Mitchell, T. M. (16 July 2015). "Machine learning: Trends, perspectives, and prospects". Science. 349 (6245): 255–260. Bibcode:2015Sci...349..255J. doi:10.1126/science.aaa8415. PMID 26185243. S2CID 677218. Reinforcement learning: * Russell & Norvig 2003, pp. 763–788 * Luger & Stubblefield 2004, pp. 442–449 Natural language processing: * ACM 1998, I.2.7 * Russell & Norvig 2003, pp. 790–831 * Poole, Mackworth & Goebel 1998, pp. 91–104 * Luger & Stubblefield 2004, pp. 591–632 "Versatile question answering systems: seeing in synthesis" Archived 1 February 2016 at the Wayback Machine, Mittal et al., IJIIDS, 5(2), 119–142, 2011 Applications of natural language processing, including information retrieval (i.e. text mining) and machine translation: * Russell & Norvig 2003, pp. 840–857, * Luger & Stubblefield 2004, pp. 623–630 Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]". IEEE Computational Intelligence Magazine. 9 (2): 48–57. doi:10.1109/MCI.2014.2307227. S2CID 206451986. Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on 11 June 2020. Retrieved 11 June 2020. Machine perception: * Russell & Norvig 2003, pp. 537–581, 863–898 * Nilsson 1998, ~chpt. 6 Speech recognition: * ACM 1998, ~I.2.7 * Russell & Norvig 2003, pp. 568–578 Object recognition: * Russell & Norvig 2003, pp. 885–892 Computer vision: * ACM 1998, I.2.10 * Russell & Norvig 2003, pp. 863–898 * Nilsson 1998, chpt. 6 Robotics: * ACM 1998, I.2.9, * Russell & Norvig 2003, pp. 901–942, * Poole, Mackworth & Goebel 1998, pp. 443–460 Moving and configuration space: * Russell & Norvig 2003, pp. 916–932 Tecuci 2012. Robotic mapping (localization, etc): * Russell & Norvig 2003, pp. 908–915 Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, Jose; Reid, Ian; Leonard, John J. (December 2016). "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". IEEE Transactions on Robotics. 32 (6): 1309–1332. arXiv:1606.05830. Bibcode:2016arXiv160605830C. doi:10.1109/TRO.2016.2624754. S2CID 2596787. Moravec, Hans (1988). Mind Children. Harvard University Press. p. 15. Chan, Szu Ping (15 November 2015). "This is what will happen when robots take over the world". Archived from the original on 24 April 2018. Retrieved 23 April 2018. "IKEA furniture and the limits of AI". The Economist. 2018. Archived from the original on 24 April 2018. Retrieved 24 April 2018. Kismet. Thompson, Derek (2018). "What Jobs Will the Robots Take?". The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018. Scassellati, Brian (2002). "Theory of mind for a humanoid robot". Autonomous Robots. 12 (1): 13–24. doi:10.1023/A:1013298507114. S2CID 1979315. Cao, Yongcan; Yu, Wenwu; Ren, Wei; Chen, Guanrong (February 2013). "An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination". IEEE Transactions on Industrial Informatics. 9 (1): 427–438. arXiv:1207.3231. doi:10.1109/TII.2012.2219061. S2CID 9588126. Thro 1993. Edelson 1991. Tao & Tan 2005. Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003. hdl:1893/25490. Emotion and affective computing: * Minsky 2006 Waddell, Kaveh (2018). "Chatbots Have Entered the Uncanny Valley". The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018. Pennachin, C.; Goertzel, B. (2007). Contemporary Approaches to Artificial General Intelligence. Artificial General Intelligence. Cognitive Technologies. Cognitive Technologies. Berlin, Heidelberg: Springer. doi:10.1007/978-3-540-68677-4_1. ISBN 978-3-540-23733-4. Roberts, Jacob (2016). "Thinking Machines: The Search for Artificial Intelligence". Distillations. Vol. 2 no. 2. pp. 14–23. Archived from the original on 19 August 2018. Retrieved 20 March 2018. "The superhero of artificial intelligence: can this genius keep it in check?". the Guardian. 16 February 2016. Archived from the original on 23 April 2018. Retrieved 26 April 2018. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (26 February 2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M. doi:10.1038/nature14236. PMID 25719670. S2CID 205242740. Sample, Ian (14 March 2017). "Google's DeepMind makes AI program that can learn like a human". the Guardian. Archived from the original on 26 April 2018. Retrieved 26 April 2018. "From not working to neural networking". The Economist. 2016. Archived from the original on 31 December 2016. Retrieved 26 April 2018. Domingos 2015. Artificial brain arguments: AI requires a simulation of the operation of the human brain * Russell & Norvig 2003, p. 957 * Crevier 1993, pp. 271 and 279 A few of the people who make some form of the argument: * Moravec 1988 * Kurzweil 2005, p. 262 * Hawkins & Blakeslee 2005 The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-1970s and was touched on by Zenon Pylyshyn and John Searle in 1980. Goertzel, Ben; Lian, Ruiting; Arel, Itamar; de Garis, Hugo; Chen, Shuo (December 2010). "A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures". Neurocomputing. 74 (1–3): 30–49. doi:10.1016/j.neucom.2010.08.012. Nilsson 1983, p. 10. Nils Nilsson writes: "Simply put, there is wide disagreement in the field about what AI is all about."[163] AI's immediate precursors: * McCorduck 2004, pp. 51–107 * Crevier 1993, pp. 27–32 * Russell & Norvig 2003, pp. 15, 940 * Moravec 1988, p. 3 Haugeland 1985, pp. 112–117 The most dramatic case of sub-symbolic AI being pushed into the background was the devastating critique of perceptrons by Marvin Minsky and Seymour Papert in 1969. See History of AI, AI winter, or Frank Rosenblatt. Cognitive simulation, Newell and Simon, AI at CMU (then called Carnegie Tech): * McCorduck 2004, pp. 139–179, 245–250, 322–323 (EPAM) * Crevier 1993, pp. 145–149 Soar (history): * McCorduck 2004, pp. 450–451 * Crevier 1993, pp. 258–263 McCarthy and AI research at SAIL and SRI International: * McCorduck 2004, pp. 251–259 * Crevier 1993 AI research at Edinburgh and in France, birth of Prolog: * Crevier 1993, pp. 193–196 * Howe 1994 AI at MIT under Marvin Minsky in the 1960s : * McCorduck 2004, pp. 259–305 * Crevier 1993, pp. 83–102, 163–176 * Russell & Norvig 2003, p. 19 Cyc: * McCorduck 2004, p. 489, who calls it "a determinedly scruffy enterprise" * Crevier 1993, pp. 239–243 * Russell & Norvig 2003, p. 363−365 * Lenat & Guha 1989 Knowledge revolution: * McCorduck 2004, pp. 266–276, 298–300, 314, 421 * Russell & Norvig 2003, pp. 22–23 Frederick, Hayes-Roth; William, Murray; Leonard, Adelman. "Expert systems". AccessScience. doi:10.1036/1097-8542.248550. Embodied approaches to AI: * McCorduck 2004, pp. 454–462 * Brooks 1990 * Moravec 1988 Weng et al. 2001. Lungarella et al. 2003. Asada et al. 2009. Oudeyer 2010. Revival of connectionism: * Crevier 1993, pp. 214–215 * Russell & Norvig 2003, p. 25 Computational intelligence * IEEE Computational Intelligence Society Archived 9 May 2008 at the Wayback Machine Hutson, Matthew (16 February 2018). "Artificial intelligence faces reproducibility crisis". Science. pp. 725–726. Bibcode:2018Sci...359..725H. doi:10.1126/science.359.6377.725. Archived from the original on 29 April 2018. Retrieved 28 April 2018. Norvig 2012. Langley 2011. Katz 2012. The intelligent agent paradigm: * Russell & Norvig 2003, pp. 27, 32–58, 968–972 * Poole, Mackworth & Goebel 1998, pp. 7–21 * Luger & Stubblefield 2004, pp. 235–240 * Hutter 2005, pp. 125–126 The definition used in this article, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional criteria. Agent architectures, hybrid intelligent systems: * Russell & Norvig (2003, pp. 27, 932, 970–972) * Nilsson (1998, chpt. 25) Hierarchical control system: * Albus 2002 Lieto, Antonio; Lebiere, Christian; Oltramari, Alessandro (May 2018). "The knowledge level in cognitive architectures: Current limitations and possibile developments". Cognitive Systems Research. 48: 39–55. doi:10.1016/j.cogsys.2017.05.001. hdl:2318/1665207. S2CID 206868967. Lieto, Antonio; Bhatt, Mehul; Oltramari, Alessandro; Vernon, David (May 2018). "The role of cognitive architectures in general artificial intelligence". Cognitive Systems Research. 48: 1–3. doi:10.1016/j.cogsys.2017.08.003. hdl:2318/1665249. S2CID 36189683. Russell & Norvig 2009, p. 1. White Paper: On Artificial Intelligence - A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. Archived (PDF) from the original on 20 February 2020. Retrieved 20 February 2020. CNN 2006. Using AI to predict flight delays Archived 20 November 2018 at the Wayback Machine, Ishti.org. N. Aletras; D. Tsarapatsanis; D. Preotiuc-Pietro; V. Lampos (2016). "Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective". PeerJ Computer Science. 2: e93. doi:10.7717/peerj-cs.93. "The Economist Explains: Why firms are piling into artificial intelligence". The Economist. 31 March 2016. Archived from the original on 8 May 2016. Retrieved 19 May 2016. Lohr, Steve (28 February 2016). "The Promise of Artificial Intelligence Unfolds in Small Steps". The New York Times. Archived from the original on 29 February 2016. Retrieved 29 February 2016. Frangoul, Anmar (14 June 2019). "A Californian business is using A.I. to change the way we think about energy storage". CNBC. Archived from the original on 25 July 2020. Retrieved 5 November 2019. Wakefield, Jane (15 June 2016). "Social media 'outstrips TV' as news source for young people". BBC News. Archived from the original on 24 June 2016. Smith, Mark (22 July 2016). "So you think you chose to read this article?". BBC News. Archived from the original on 25 July 2016. Brown, Eileen. "Half of Americans do not believe deepfake news could target them online". ZDNet. Archived from the original on 6 November 2019. Retrieved 3 December 2019. The Turing test: Turing's original publication: * Turing 1950 Historical influence and philosophical implications: * Haugeland 1985, pp. 6–9 * Crevier 1993, p. 24 * McCorduck 2004, pp. 70–71 * Russell & Norvig 2003, pp. 2–3 and 948 Dartmouth proposal: * McCarthy et al. 1955 (the original proposal) * Crevier 1993, p. 49 (historical significance) The physical symbol systems hypothesis: * Newell & Simon 1976, p. 116 * McCorduck 2004, p. 153 * Russell & Norvig 2003, p. 18 Dreyfus 1992, p. 156. Dreyfus criticized the necessary condition of the physical symbol system hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules."[206] Dreyfus' critique of artificial intelligence: * Dreyfus 1972, Dreyfus & Dreyfus 1986 * Crevier 1993, pp. 120–132 * McCorduck 2004, pp. 211–239 * Russell & Norvig 2003, pp. 950–952, Gödel 1951: in this lecture, Kurt Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a "certain fact". The Mathematical Objection: * Russell & Norvig 2003, p. 949 * McCorduck 2004, pp. 448–449 Making the Mathematical Objection: * Lucas 1961 * Penrose 1989 Refuting Mathematical Objection: * Turing 1950 under "(2) The Mathematical Objection" * Hofstadter 1979 Background: * Gödel 1931, Church 1936, Kleene 1935, Turing 1937 Graham Oppy (20 January 2015). "Gödel's Incompleteness Theorems". Stanford Encyclopedia of Philosophy. Archived from the original on 22 April 2016. Retrieved 27 April 2016. These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail. Stuart J. Russell; Peter Norvig (2010). "26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection". Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-604259-4. even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations. Mark Colyvan. An introduction to the philosophy of mathematics. Cambridge University Press, 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail." Iphofen, Ron; Kritikos, Mihalis (3 January 2019). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science: 1–15. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041. "Ethical AI Learns Human Rights Framework". Voice of America. Archived from the original on 11 November 2019. Retrieved 10 November 2019. Crevier 1993, pp. 132–144. In the early 1970s, Kenneth Colby presented a version of Weizenbaum's ELIZA known as DOCTOR which he promoted as a serious therapeutic tool.[216] Joseph Weizenbaum's critique of AI: * Weizenbaum 1976 * Crevier 1993, pp. 132–144 * McCorduck 2004, pp. 356–373 * Russell & Norvig 2003, p. 961 Weizenbaum (the AI researcher who developed the first chatterbot program, ELIZA) argued in 1976 that the misuse of artificial intelligence has the potential to devalue human life. Wendell Wallach (2010). Moral Machines, Oxford University Press. Wallach, pp 37–54. Wallach, pp 55–73. Wallach, Introduction chapter. Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press. "Machine Ethics". aaai.org. Archived from the original on 29 November 2014. Rubin, Charles (Spring 2003). "Artificial Intelligence and Human Nature". The New Atlantis. 1: 88–100. Archived from the original on 11 June 2012. Brooks, Rodney (10 November 2014). "artificial intelligence is a tool, not a threat". Archived from the original on 12 November 2014. "Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence". Observer. 19 August 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015. Chalmers, David (1995). "Facing up to the problem of consciousness". Journal of Consciousness Studies. 2 (3): 200–219. Archived from the original on 8 March 2005. Retrieved 11 October 2018. See also this link Archived 8 April 2011 at the Wayback Machine Horst, Steven, (2005) "The Computational Theory of Mind" Archived 11 September 2018 at the Wayback Machine in The Stanford Encyclopedia of Philosophy Searle 1980, p. 1. This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." [230] Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently
34
star
3

Other-sources

Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino, M.; Yoshida, C. (2009). "Cognitive developmental robotics: a survey". IEEE Transactions on Autonomous Mental Development. 1 (1): 12–34. doi:10.1109/tamd.2009.2021702. S2CID 10168773. "ACM Computing Classification System: Artificial intelligence". ACM. 1998. Archived from the original on 12 October 2007. Retrieved 30 August 2007. Goodman, Joanna (2016). Robots in Law: How Artificial Intelligence is Transforming Legal Services (1st ed.). Ark Group. ISBN 978-1-78358-264-8. Archived from the original on 8 November 2016. Retrieved 7 November 2016. Albus, J. S. (2002). "4-D/RCS: A Reference Model Architecture for Intelligent Unmanned Ground Vehicles" (PDF). In Gerhart, G.; Gunderson, R.; Shoemaker, C. (eds.). Proceedings of the SPIE AeroSense Session on Unmanned Ground Vehicle Technology. Unmanned Ground Vehicle Technology IV. 3693. pp. 11–20. Bibcode:2002SPIE.4715..303A. CiteSeerX 10.1.1.15.14. doi:10.1117/12.474462. S2CID 63339739. Archived from the original (PDF) on 25 July 2004. Aleksander, Igor (1995). Artificial Neuroconsciousness: An Update. IWANN. Archived from the original on 2 March 1997. BibTex Archived 2 March 1997 at the Wayback Machine. Bach, Joscha (2008). "Seven Principles of Synthetic Intelligence". In Wang, Pei; Goertzel, Ben; Franklin, Stan (eds.). Artificial General Intelligence, 2008: Proceedings of the First AGI Conference. IOS Press. pp. 63–74. ISBN 978-1-58603-833-5. Archived from the original on 8 July 2016. Retrieved 16 February 2016. "Robots could demand legal rights". BBC News. 21 December 2006. Archived from the original on 15 October 2019. Retrieved 3 February 2011. Brooks, Rodney (1990). "Elephants Don't Play Chess" (PDF). Robotics and Autonomous Systems. 6 (1–2): 3–15. CiteSeerX 10.1.1.588.7539. doi:10.1016/S0921-8890(05)80025-9. Archived (PDF) from the original on 9 August 2007. Brooks, R. A. (1991). "How to build complete creatures rather than isolated cognitive simulators". In VanLehn, K. (ed.). Architectures for Intelligence. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 225–239. CiteSeerX 10.1.1.52.9510. Buchanan, Bruce G. (2005). "A (Very) Brief History of Artificial Intelligence" (PDF). AI Magazine: 53–60. Archived from the original (PDF) on 26 September 2007. Butler, Samuel (13 June 1863). "Darwin among the Machines". Letters to the Editor. The Press. Christchurch, New Zealand. Archived from the original on 19 September 2008. Retrieved 16 October 2014 – via Victoria University of Wellington. Clark, Jack (8 December 2015). "Why 2015 Was a Breakthrough Year in Artificial Intelligence". Bloomberg News. Archived from the original on 23 November 2016. Retrieved 23 November 2016. After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever. "AI set to exceed human brain power". CNN. 26 July 2006. Archived from the original on 19 February 2008. Dennett, Daniel (1991). Consciousness Explained. The Penguin Press. ISBN 978-0-7139-9037-9. Domingos, Pedro (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0-465-06192-1. Dowe, D. L.; Hajek, A. R. (1997). "A computational extension to the Turing Test". Proceedings of the 4th Conference of the Australasian Cognitive Science Society. Archived from the original on 28 June 2011. Dreyfus, Hubert (1972). What Computers Can't Do. New York: MIT Press. ISBN 978-0-06-011082-6. Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell. ISBN 978-0-02-908060-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Dreyfus, Hubert (1992). What Computers Still Can't Do. New York: MIT Press. ISBN 978-0-262-54067-4. Dyson, George (1998). Darwin among the Machines. Allan Lane Science. ISBN 978-0-7382-0030-9. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Edelman, Gerald (23 November 2007). "Gerald Edelman – Neural Darwinism and Brain-based Devices". Talking Robots. Archived from the original on 8 October 2009. Edelson, Edward (1991). The Nervous System. New York: Chelsea House. ISBN 978-0-7910-0464-7. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Fearn, Nicholas (2007). The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers. New York: Grove Press. ISBN 978-0-8021-1839-4. Gladwell, Malcolm (2005). Blink. New York: Little, Brown and Co. ISBN 978-0-316-17232-5. Gödel, Kurt (1951). Some basic theorems on the foundations of mathematics and their implications. Gibbs Lecture. In Feferman, Solomon, ed. (1995). Kurt Gödel: Collected Works, Vol. III: Unpublished Essays and Lectures. Oxford University Press. pp. 304–23. ISBN 978-0-19-514722-3. Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press. ISBN 978-0-262-08153-5. Hawkins, Jeff; Blakeslee, Sandra (2005). On Intelligence. New York, NY: Owl Books. ISBN 978-0-8050-7853-4. Henderson, Mark (24 April 2007). "Human rights for robots? We're getting carried away". The Times Online. London. Archived from the original on 31 May 2014. Retrieved 31 May 2014. Hernandez-Orallo, Jose (2000). "Beyond the Turing Test". Journal of Logic, Language and Information. 9 (4): 447–466. doi:10.1023/A:1008367325700. S2CID 14481982. Hernandez-Orallo, J.; Dowe, D. L. (2010). "Measuring Universal Intelligence: Towards an Anytime Intelligence Test". Artificial Intelligence. 174 (18): 1508–1539. CiteSeerX 10.1.1.295.9079. doi:10.1016/j.artint.2010.09.006. Hinton, G. E. (2007). "Learning multiple layers of representation". Trends in Cognitive Sciences. 11 (10): 428–434. doi:10.1016/j.tics.2007.09.004. PMID 17921042. S2CID 15066318. Hofstadter, Douglas (1979). Gödel, Escher, Bach: an Eternal Golden Braid. New York, NY: Vintage Books. ISBN 978-0-394-74502-2. Holland, John H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press. ISBN 978-0-262-58111-0. Archived from the original on 26 July 2020. Retrieved 17 December 2019. Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University: a Perspective". Archived from the original on 15 May 2007. Retrieved 30 August 2007. Hutter, M. (2012). "One Decade of Universal Artificial Intelligence". Theoretical Foundations of Artificial General Intelligence. Atlantis Thinking Machines. 4. pp. 67–88. CiteSeerX 10.1.1.228.8725. doi:10.2991/978-94-91216-62-6_5. ISBN 978-94-91216-61-9. S2CID 8888091. Kahneman, Daniel; Slovic, D.; Tversky, Amos (1982). Judgment under uncertainty: Heuristics and biases. Science. 185. New York: Cambridge University Press. pp. 1124–31. doi:10.1126/science.185.4157.1124. ISBN 978-0-521-28414-1. PMID 17835457. S2CID 143452957. Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. Katz, Yarden (1 November 2012). "Noam Chomsky on Where Artificial Intelligence Went Wrong". The Atlantic. Archived from the original on 28 February 2019. Retrieved 26 October 2014. "Kismet". MIT Artificial Intelligence Laboratory, Humanoid Robotics Group. Archived from the original on 17 October 2014. Retrieved 25 October 2014. Koza, John R. (1992). Genetic Programming (On the Programming of Computers by Means of Natural Selection). MIT Press. Bibcode:1992gppc.book.....K. ISBN 978-0-262-11170-6. Kolata, G. (1982). "How can computers get common sense?". Science. 217 (4566): 1237–1238. Bibcode:1982Sci...217.1237K. doi:10.1126/science.217.4566.1237. PMID 17837639. Kumar, Gulshan; Kumar, Krishan (2012). "The Use of Artificial-Intelligence-Based Ensembles for Intrusion Detection: A Review". Applied Computational Intelligence and Soft Computing. 2012: 1–20. doi:10.1155/2012/850160. Kurzweil, Ray (1999). The Age of Spiritual Machines. Penguin Books. ISBN 978-0-670-88217-5. Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 978-0-670-03384-3. Lakoff, George; Núñez, Rafael E. (2000). Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Basic Books. ISBN 978-0-465-03771-1. Langley, Pat (2011). "The changing science of machine learning". Machine Learning. 82 (3): 275–279. doi:10.1007/s10994-011-5242-y. Law, Diane (June 1994). Searle, Subsymbolic Functionalism and Synthetic Intelligence (Technical report). University of Texas at Austin. p. AI94-222. CiteSeerX 10.1.1.38.8384. Legg, Shane; Hutter, Marcus (15 June 2007). A Collection of Definitions of Intelligence (Technical report). IDSIA. arXiv:0706.3639. Bibcode:2007arXiv0706.3639L. 07-07. Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems. Addison-Wesley. ISBN 978-0-201-51752-1. Lighthill, James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council. Lucas, John (1961). "Minds, Machines and Gödel". In Anderson, A.R. (ed.). Minds and Machines. Archived from the original on 19 August 2007. Retrieved 30 August 2007. Lungarella, M.; Metta, G.; Pfeifer, R.; Sandini, G. (2003). "Developmental robotics: a survey". Connection Science. 15 (4): 151–190. CiteSeerX 10.1.1.83.7615. doi:10.1080/09540090310001655110. S2CID 1452734. Maker, Meg Houston (2006). "AI@50: AI Past, Present, Future". Dartmouth College. Archived from the original on 3 January 2007. Retrieved 16 October 2008. Markoff, John (16 February 2011). "Computer Wins on 'Jeopardy!': Trivial, It's Not". The New York Times. Archived from the original on 22 October 2014. Retrieved 25 October 2014. McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence". Archived from the original on 26 August 2007. Retrieved 30 August 2007.. McCarthy, John; Hayes, P. J. (1969). "Some philosophical problems from the standpoint of artificial intelligence". Machine Intelligence. 4: 463–502. CiteSeerX 10.1.1.85.5082. Archived from the original on 10 August 2007. Retrieved 30 August 2007. McCarthy, John (12 November 2007). "What Is Artificial Intelligence?". Archived from the original on 18 November 2015. Minsky, Marvin (1967). Computation: Finite and Infinite Machines. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 978-0-13-165449-5. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Minsky, Marvin (2006). The Emotion Machine. New York, NY: Simon & Schusterl. ISBN 978-0-7432-7663-4. Moravec, Hans (1988). Mind Children. Harvard University Press. ISBN 978-0-674-57616-2. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Norvig, Peter (25 June 2012). "On Chomsky and the Two Cultures of Statistical Learning". Peter Norvig. Archived from the original on 19 October 2014. NRC (United States National Research Council) (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press. Needham, Joseph (1986). Science and Civilization in China: Volume 2. Caves Books Ltd. Newell, Allen; Simon, H. A. (1976). "Computer Science as Empirical Inquiry: Symbols and Search". Communications of the ACM. 19 (3): 113–126. doi:10.1145/360018.360022.. Nilsson, Nils (1983). "Artificial Intelligence Prepares for 2001" (PDF). AI Magazine. 1 (1). Archived (PDF) from the original on 17 August 2020. Retrieved 22 August 2020. Presidential Address to the Association for the Advancement of Artificial Intelligence. O'Brien, James; Marakas, George (2011). Management Information Systems (10th ed.). McGraw-Hill/Irwin. ISBN 978-0-07-337681-3. O'Connor, Kathleen Malone (1994). "The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam". University of Pennsylvania: 1–435. Archived from the original on 5 December 2019. Retrieved 27 August 2008. Oudeyer, P-Y. (2010). "On the impact of robotics in behavioral and cognitive sciences: from insect navigation to human cognitive development" (PDF). IEEE Transactions on Autonomous Mental Development. 2 (1): 2–16. doi:10.1109/tamd.2009.2039057. S2CID 6362217. Archived (PDF) from the original on 3 October 2018. Retrieved 4 June 2013. Penrose, Roger (1989). The Emperor's New Mind: Concerning Computer, Minds and The Laws of Physics. Oxford University Press. ISBN 978-0-19-851973-7. Poli, R.; Langdon, W. B.; McPhee, N. F. (2008). A Field Guide to Genetic Programming. Lulu.com. ISBN 978-1-4092-0073-4. Archived from the original on 8 August 2015. Retrieved 21 April 2008 – via gp-field-guide.org.uk. Rajani, Sandeep (2011). "Artificial Intelligence – Man or Machine" (PDF). International Journal of Information Technology and Knowledge Management. 4 (1): 173–176. Archived from the original (PDF) on 18 January 2013. Ronald, E. M. A. and Sipper, M. Intelligence is not enough: On the socialization of talking machines, Minds and Machines Archived 25 July 2020 at the Wayback Machine, vol. 11, no. 4, pp. 567–576, November 2001. Ronald, E. M. A. and Sipper, M. What use is a Turing chatterbox? Archived 25 July 2020 at the Wayback Machine, Communications of the ACM, vol. 43, no. 10, pp. 21–23, October 2000. "Science". August 1982. Archived from the original on 25 July 2020. Retrieved 16 February 2016. Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. Archived (PDF) from the original on 17 March 2019. Retrieved 22 August 2020. Searle, John (1999). Mind, language and society. New York, NY: Basic Books. ISBN 978-0-465-04521-1. OCLC 231867665. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Shapiro, Stuart C. (1992). "Artificial Intelligence". In Shapiro, Stuart C. (ed.). Encyclopedia of Artificial Intelligence (PDF) (2nd ed.). New York: John Wiley. pp. 54–57. ISBN 978-0-471-50306-4. Archived (PDF) from the original on 1 February 2016. Retrieved 29 May 2009. Simon, H. A. (1965). The Shape of Automation for Men and Management. New York: Harper & Row. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Skillings, Jonathan (3 July 2006). "Getting Machines to Think Like Us". cnet. Archived from the original on 16 November 2011. Retrieved 3 February 2011. Solomonoff, Ray (1956). An Inductive Inference Machine (PDF). Dartmouth Summer Research Conference on Artificial Intelligence. Archived (PDF) from the original on 26 April 2011. Retrieved 22 March 2011 – via std.com, pdf scanned copy of the original. Later published as Solomonoff, Ray (1957). "An Inductive Inference Machine". IRE Convention Record. Section on Information Theory, part 2. pp. 56–62. Tao, Jianhua; Tan, Tieniu (2005). Affective Computing and Intelligent Interaction. Affective Computing: A Review. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548. Tecuci, Gheorghe (March–April 2012). "Artificial Intelligence". Wiley Interdisciplinary Reviews: Computational Statistics. 4 (2): 168–180. doi:10.1002/wics.200. Thro, Ellen (1993). Robotics: The Marriage of Computers and Machines. New York: Facts on File. ISBN 978-0-8160-2628-9. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423. van der Walt, Christiaan; Bernard, Etienne (2006). "Data characteristics that determine classifier performance" (PDF). Archived from the original (PDF) on 25 March 2009. Retrieved 5 August 2009. Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace: 11. Bibcode:1993vise.nasa...11V. Archived from the original on 1 January 2007. Retrieved 14 November 2011. Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Archived from the original on 26 July 2020. Retrieved 18 November 2019. Weizenbaum, Joseph (1976). Computer Power and Human Reason. San Francisco: W.H. Freeman & Company. ISBN 978-0-7167-0464-5. Weng, J.; McClelland; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M.; Thelen, E. (2001). "Autonomous mental development by robots and animals" (PDF). Science. 291 (5504): 599–600. doi:10.1126/science.291.5504.599. PMID 11229402. S2CID 54131797. Archived (PDF) from the original on 4 September 2013. Retrieved 4 June 2013 – via msu.edu. "Applications of AI". www-formal.stanford.edu. Archived from the original on 28 August 2016. Retrieved 25 September 2016. Further reading DH Author, 'Why Are There Still So Many Jobs? The History and Future of Workplace Automation' (2015) 29(3) Journal of Economic Perspectives 3. Boden, Margaret, Mind As Machine, Oxford University Press, 2006. Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.) Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93. Gopnik, Alison, "Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn", Scientific American, vol. 316, no. 6 (June 2017), pp. 60–65. Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press. Koch, Christof, "Proust among the Machines", Scientific American, vol. 321, no. 6 (December 2019), pp. 46–49. Christof Koch doubts the possibility of "intelligent" machines attaining consciousness, because "[e]ven the most sophisticated brain simulations are unlikely to produce conscious feelings." (p. 48.) According to Koch, "Whether machines can become sentient [is important] for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to... humans. Per GNW [the Global Neuronal Workspace theory], they turn from mere objects into subjects... with a point of view.... Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible – the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself." (p. 49.) Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable disambiguation. An example is the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence refers. (p. 61.) E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) SSRN, part 2(3) Archived 24 May 2018 at the Wayback Machine. George Musser, "Artificial Imagination: How machines could learn creativity and common sense, among other human qualities", Scientific American, vol. 320, no. 5 (May 2019), pp. 58–63. Myers, Courtney Boyd ed. (2009). "The AI Report" Archived 29 July 2017 at the Wayback Machine. Forbes June 2009 Raphael, Bertram (1976). The Thinking Computer. W.H.Freeman and Company. ISBN 978-0-7167-0723-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020. Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.) Serenko, Alexander (2010). "The development of an AI journal ranking based on the revealed preference approach" (PDF). Journal of Informetrics. 4 (4): 447–459. doi:10.1016/j.joi.2010.04.001. Archived (PDF) from the original on 4 October 2013. Retrieved 24 August 2013. Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics. 5 (4): 629–649. doi:10.1016/j.joi.2011.06.002. Archived (PDF) from the original on 4 October 2013. Retrieved 12 September 2013. Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994. Tom Simonite (29 December 2014). "2014 in Computing: Breakthroughs in Artificial Intelligence". MIT Technology Review. Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the environmental problem remains fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly.... Finally, there is the threat du jour: corporations and the technologies they promote." (pp. 56–57.)
16
star
4

SQL-Injection-attacks

SQL Injection attack is the most common website hacking technique. Most websites use Structured Query Language (SQL) to interact with databases. SQL allows the website to create, retrieve, update, and delete database records. It used for everything from logging a user into the website to storing details of an eCommerce transaction. An SQL injection attack places SQL into a web form in an attempt to get the application to run it. For example, instead of typing plain text into a username or password field, a hacker may type in ‘ OR 1=1. If the application appends this string directly to an SQL command that is designed to check if a user exists in the database, it will always return true. This can allow a hacker to gain access to a restricted section of a website. Other SQL injection attacks can be used to delete data from the database or insert new data. Hackers sometimes use automated tools to perform SQL injections on remote websites. They will scan thousands of websites, testing many types of injection attacks until they are successful. SQL injection attacks can be prevented by correctly filtering user input. Most programming languages have special functions to safely handle user input that is going to be used in an SQL query.
13
star
5

numpy

Quickstart tutorial Prerequisites Before reading this tutorial you should know a bit of Python. If you would like to refresh your memory, take a look at the Python tutorial. If you wish to work the examples in this tutorial, you must also have some software installed on your computer. Please see https://scipy.org/install.html for instructions. Learner profile This tutorial is intended as a quick overview of algebra and arrays in NumPy and want to understand how n-dimensional (n>=2) arrays are represented and can be manipulated. In particular, if you don’t know how to apply common functions to n-dimensional arrays (without using for-loops), or if you want to understand axis and shape properties for n-dimensional arrays, this tutorial might be of help. Learning Objectives After this tutorial, you should be able to: Understand the difference between one-, two- and n-dimensional arrays in NumPy; Understand how to apply some linear algebra operations to n-dimensional arrays without using for-loops; Understand axis and shape properties for n-dimensional arrays. The Basics NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of non-negative integers. In NumPy dimensions are called axes. For example, the coordinates of a point in 3D space [1, 2, 1] has one axis. That axis has 3 elements in it, so we say it has a length of 3. In the example pictured below, the array has 2 axes. The first axis has a length of 2, the second axis has a length of 3. [[ 1., 0., 0.], [ 0., 1., 2.]] NumPy’s array class is called ndarray. It is also known by the alias array. Note that numpy.array is not the same as the Standard Python Library class array.array, which only handles one-dimensional arrays and offers less functionality. The more important attributes of an ndarray object are: ndarray.ndim the number of axes (dimensions) of the array. ndarray.shape the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with n rows and m columns, shape will be (n,m). The length of the shape tuple is therefore the number of axes, ndim. ndarray.size the total number of elements of the array. This is equal to the product of the elements of shape. ndarray.dtype an object describing the type of the elements in the array. One can create or specify dtype’s using standard Python types. Additionally NumPy provides types of its own. numpy.int32, numpy.int16, and numpy.float64 are some examples. ndarray.itemsize the size in bytes of each element of the array. For example, an array of elements of type float64 has itemsize 8 (=64/8), while one of type complex32 has itemsize 4 (=32/8). It is equivalent to ndarray.dtype.itemsize. ndarray.data the buffer containing the actual elements of the array. Normally, we won’t need to use this attribute because we will access the elements in an array using indexing facilities. An example >>> import numpy as np a = np.arange(15).reshape(3, 5) a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) a.shape (3, 5) a.ndim 2 a.dtype.name 'int64' a.itemsize 8 a.size 15 type(a) <class 'numpy.ndarray'> b = np.array([6, 7, 8]) b array([6, 7, 8]) type(b) <class 'numpy.ndarray'> Array Creation There are several ways to create arrays. For example, you can create an array from a regular Python list or tuple using the array function. The type of the resulting array is deduced from the type of the elements in the sequences. >>> >>> import numpy as np >>> a = np.array([2,3,4]) >>> a array([2, 3, 4]) >>> a.dtype dtype('int64') >>> b = np.array([1.2, 3.5, 5.1]) >>> b.dtype dtype('float64') A frequent error consists in calling array with multiple arguments, rather than providing a single sequence as an argument. >>> >>> a = np.array(1,2,3,4) # WRONG Traceback (most recent call last): ... TypeError: array() takes from 1 to 2 positional arguments but 4 were given >>> a = np.array([1,2,3,4]) # RIGHT array transforms sequences of sequences into two-dimensional arrays, sequences of sequences of sequences into three-dimensional arrays, and so on. >>> >>> b = np.array([(1.5,2,3), (4,5,6)]) >>> b array([[1.5, 2. , 3. ], [4. , 5. , 6. ]]) The type of the array can also be explicitly specified at creation time: >>> >>> c = np.array( [ [1,2], [3,4] ], dtype=complex ) >>> c array([[1.+0.j, 2.+0.j], [3.+0.j, 4.+0.j]]) Often, the elements of an array are originally unknown, but its size is known. Hence, NumPy offers several functions to create arrays with initial placeholder content. These minimize the necessity of growing arrays, an expensive operation. The function zeros creates an array full of zeros, the function ones creates an array full of ones, and the function empty creates an array whose initial content is random and depends on the state of the memory. By default, the dtype of the created array is float64. >>> >>> np.zeros((3, 4)) array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]) >>> np.ones( (2,3,4), dtype=np.int16 ) # dtype can also be specified array([[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]], dtype=int16) >>> np.empty( (2,3) ) # uninitialized array([[ 3.73603959e-262, 6.02658058e-154, 6.55490914e-260], # may vary [ 5.30498948e-313, 3.14673309e-307, 1.00000000e+000]]) To create sequences of numbers, NumPy provides the arange function which is analogous to the Python built-in range, but returns an array. >>> >>> np.arange( 10, 30, 5 ) array([10, 15, 20, 25]) >>> np.arange( 0, 2, 0.3 ) # it accepts float arguments array([0. , 0.3, 0.6, 0.9, 1.2, 1.5, 1.8]) When arange is used with floating point arguments, it is generally not possible to predict the number of elements obtained, due to the finite floating point precision. For this reason, it is usually better to use the function linspace that receives as an argument the number of elements that we want, instead of the step: >>> >>> from numpy import pi >>> np.linspace( 0, 2, 9 ) # 9 numbers from 0 to 2 array([0. , 0.25, 0.5 , 0.75, 1. , 1.25, 1.5 , 1.75, 2. ]) >>> x = np.linspace( 0, 2*pi, 100 ) # useful to evaluate function at lots of points >>> f = np.sin(x) See also array, zeros, zeros_like, ones, ones_like, empty, empty_like, arange, linspace, numpy.random.Generator.rand, numpy.random.Generator.randn, fromfunction, fromfile Printing Arrays When you print an array, NumPy displays it in a similar way to nested lists, but with the following layout: the last axis is printed from left to right, the second-to-last is printed from top to bottom, the rest are also printed from top to bottom, with each slice separated from the next by an empty line. One-dimensional arrays are then printed as rows, bidimensionals as matrices and tridimensionals as lists of matrices. >>> >>> a = np.arange(6) # 1d array >>> print(a) [0 1 2 3 4 5] >>> >>> b = np.arange(12).reshape(4,3) # 2d array >>> print(b) [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] >>> >>> c = np.arange(24).reshape(2,3,4) # 3d array >>> print(c) [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] See below to get more details on reshape. If an array is too large to be printed, NumPy automatically skips the central part of the array and only prints the corners: >>> >>> print(np.arange(10000)) [ 0 1 2 ... 9997 9998 9999] >>> >>> print(np.arange(10000).reshape(100,100)) [[ 0 1 2 ... 97 98 99] [ 100 101 102 ... 197 198 199] [ 200 201 202 ... 297 298 299] ... [9700 9701 9702 ... 9797 9798 9799] [9800 9801 9802 ... 9897 9898 9899] [9900 9901 9902 ... 9997 9998 9999]] To disable this behaviour and force NumPy to print the entire array, you can change the printing options using set_printoptions. >>> >>> np.set_printoptions(threshold=sys.maxsize) # sys module should be imported Basic Operations Arithmetic operators on arrays apply elementwise. A new array is created and filled with the result. >>> >>> a = np.array( [20,30,40,50] ) >>> b = np.arange( 4 ) >>> b array([0, 1, 2, 3]) >>> c = a-b >>> c array([20, 29, 38, 47]) >>> b**2 array([0, 1, 4, 9]) >>> 10*np.sin(a) array([ 9.12945251, -9.88031624, 7.4511316 , -2.62374854]) >>> a<35 array([ True, True, False, False]) Unlike in many matrix languages, the product operator * operates elementwise in NumPy arrays. The matrix product can be performed using the @ operator (in python >=3.5) or the dot function or method: >>> >>> A = np.array( [[1,1], ... [0,1]] ) >>> B = np.array( [[2,0], ... [3,4]] ) >>> A * B # elementwise product array([[2, 0], [0, 4]]) >>> A @ B # matrix product array([[5, 4], [3, 4]]) >>> A.dot(B) # another matrix product array([[5, 4], [3, 4]]) Some operations, such as += and *=, act in place to modify an existing array rather than create a new one. >>> >>> rg = np.random.default_rng(1) # create instance of default random number generator >>> a = np.ones((2,3), dtype=int) >>> b = rg.random((2,3)) >>> a *= 3 >>> a array([[3, 3, 3], [3, 3, 3]]) >>> b += a >>> b array([[3.51182162, 3.9504637 , 3.14415961], [3.94864945, 3.31183145, 3.42332645]]) >>> a += b # b is not automatically converted to integer type Traceback (most recent call last): ... numpy.core._exceptions.UFuncTypeError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind' When operating with arrays of different types, the type of the resulting array corresponds to the more general or precise one (a behavior known as upcasting). >>> >>> a = np.ones(3, dtype=np.int32) >>> b = np.linspace(0,pi,3) >>> b.dtype.name 'float64' >>> c = a+b >>> c array([1. , 2.57079633, 4.14159265]) >>> c.dtype.name 'float64' >>> d = np.exp(c*1j) >>> d array([ 0.54030231+0.84147098j, -0.84147098+0.54030231j, -0.54030231-0.84147098j]) >>> d.dtype.name 'complex128' Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the ndarray class. >>> >>> a = rg.random((2,3)) >>> a array([[0.82770259, 0.40919914, 0.54959369], [0.02755911, 0.75351311, 0.53814331]]) >>> a.sum() 3.1057109529998157 >>> a.min() 0.027559113243068367 >>> a.max() 0.8277025938204418 By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. However, by specifying the axis parameter you can apply an operation along the specified axis of an array: >>> >>> b = np.arange(12).reshape(3,4) >>> b array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> b.sum(axis=0) # sum of each column array([12, 15, 18, 21]) >>> >>> b.min(axis=1) # min of each row array([0, 4, 8]) >>> >>> b.cumsum(axis=1) # cumulative sum along each row array([[ 0, 1, 3, 6], [ 4, 9, 15, 22], [ 8, 17, 27, 38]]) Universal Functions NumPy provides familiar mathematical functions such as sin, cos, and exp. In NumPy, these are called “universal functions”(ufunc). Within NumPy, these functions operate elementwise on an array, producing an array as output. >>> >>> B = np.arange(3) >>> B array([0, 1, 2]) >>> np.exp(B) array([1. , 2.71828183, 7.3890561 ]) >>> np.sqrt(B) array([0. , 1. , 1.41421356]) >>> C = np.array([2., -1., 4.]) >>> np.add(B, C) array([2., 0., 6.]) See also all, any, apply_along_axis, argmax, argmin, argsort, average, bincount, ceil, clip, conj, corrcoef, cov, cross, cumprod, cumsum, diff, dot, floor, inner, invert, lexsort, max, maximum, mean, median, min, minimum, nonzero, outer, prod, re, round, sort, std, sum, trace, transpose, var, vdot, vectorize, where Indexing, Slicing and Iterating One-dimensional arrays can be indexed, sliced and iterated over, much like lists and other Python sequences. >>> >>> a = np.arange(10)**3 >>> a array([ 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]) >>> a[2] 8 >>> a[2:5] array([ 8, 27, 64]) # equivalent to a[0:6:2] = 1000; # from start to position 6, exclusive, set every 2nd element to 1000 >>> a[:6:2] = 1000 >>> a array([1000, 1, 1000, 27, 1000, 125, 216, 343, 512, 729]) >>> a[ : :-1] # reversed a array([ 729, 512, 343, 216, 125, 1000, 27, 1000, 1, 1000]) >>> for i in a: ... print(i**(1/3.)) ... 9.999999999999998 1.0 9.999999999999998 3.0 9.999999999999998 4.999999999999999 5.999999999999999 6.999999999999999 7.999999999999999 8.999999999999998 Multidimensional arrays can have one index per axis. These indices are given in a tuple separated by commas: >>> >>> def f(x,y): ... return 10*x+y ... >>> b = np.fromfunction(f,(5,4),dtype=int) >>> b array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33], [40, 41, 42, 43]]) >>> b[2,3] 23 >>> b[0:5, 1] # each row in the second column of b array([ 1, 11, 21, 31, 41]) >>> b[ : ,1] # equivalent to the previous example array([ 1, 11, 21, 31, 41]) >>> b[1:3, : ] # each column in the second and third row of b array([[10, 11, 12, 13], [20, 21, 22, 23]]) When fewer indices are provided than the number of axes, the missing indices are considered complete slices: >>> >>> b[-1] # the last row. Equivalent to b[-1,:] array([40, 41, 42, 43]) The expression within brackets in b[i] is treated as an i followed by as many instances of : as needed to represent the remaining axes. NumPy also allows you to write this using dots as b[i,...]. The dots (...) represent as many colons as needed to produce a complete indexing tuple. For example, if x is an array with 5 axes, then x[1,2,...] is equivalent to x[1,2,:,:,:], x[...,3] to x[:,:,:,:,3] and x[4,...,5,:] to x[4,:,:,5,:]. >>> >>> c = np.array( [[[ 0, 1, 2], # a 3D array (two stacked 2D arrays) ... [ 10, 12, 13]], ... [[100,101,102], ... [110,112,113]]]) >>> c.shape (2, 2, 3) >>> c[1,...] # same as c[1,:,:] or c[1] array([[100, 101, 102], [110, 112, 113]]) >>> c[...,2] # same as c[:,:,2] array([[ 2, 13], [102, 113]]) Iterating over multidimensional arrays is done with respect to the first axis: >>> >>> for row in b: ... print(row) ... [0 1 2 3] [10 11 12 13] [20 21 22 23] [30 31 32 33] [40 41 42 43] However, if one wants to perform an operation on each element in the array, one can use the flat attribute which is an iterator over all the elements of the array: >>> >>> for element in b.flat: ... print(element) ... 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 See also Indexing, Indexing (reference), newaxis, ndenumerate, indices Shape Manipulation Changing the shape of an array An array has a shape given by the number of elements along each axis: >>> >>> a = np.floor(10*rg.random((3,4))) >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.shape (3, 4) The shape of an array can be changed with various commands. Note that the following three commands all return a modified array, but do not change the original array: >>> >>> a.ravel() # returns the array, flattened array([3., 7., 3., 4., 1., 4., 2., 2., 7., 2., 4., 9.]) >>> a.reshape(6,2) # returns the array with a modified shape array([[3., 7.], [3., 4.], [1., 4.], [2., 2.], [7., 2.], [4., 9.]]) >>> a.T # returns the array, transposed array([[3., 1., 7.], [7., 4., 2.], [3., 2., 4.], [4., 2., 9.]]) >>> a.T.shape (4, 3) >>> a.shape (3, 4) The order of the elements in the array resulting from ravel() is normally “C-style”, that is, the rightmost index “changes the fastest”, so the element after a[0,0] is a[0,1]. If the array is reshaped to some other shape, again the array is treated as “C-style”. NumPy normally creates arrays stored in this order, so ravel() will usually not need to copy its argument, but if the array was made by taking slices of another array or created with unusual options, it may need to be copied. The functions ravel() and reshape() can also be instructed, using an optional argument, to use FORTRAN-style arrays, in which the leftmost index changes the fastest. The reshape function returns its argument with a modified shape, whereas the ndarray.resize method modifies the array itself: >>> >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.resize((2,6)) >>> a array([[3., 7., 3., 4., 1., 4.], [2., 2., 7., 2., 4., 9.]]) If a dimension is given as -1 in a reshaping operation, the other dimensions are automatically calculated: >>> >>> a.reshape(3,-1) array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) See also ndarray.shape, reshape, resize, ravel Stacking together different arrays Several arrays can be stacked together along different axes: >>> >>> a = np.floor(10*rg.random((2,2))) >>> a array([[9., 7.], [5., 2.]]) >>> b = np.floor(10*rg.random((2,2))) >>> b array([[1., 9.], [5., 1.]]) >>> np.vstack((a,b)) array([[9., 7.], [5., 2.], [1., 9.], [5., 1.]]) >>> np.hstack((a,b)) array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) The function column_stack stacks 1D arrays as columns into a 2D array. It is equivalent to hstack only for 2D arrays: >>> >>> from numpy import newaxis >>> np.column_stack((a,b)) # with 2D arrays array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) >>> a = np.array([4.,2.]) >>> b = np.array([3.,8.]) >>> np.column_stack((a,b)) # returns a 2D array array([[4., 3.], [2., 8.]]) >>> np.hstack((a,b)) # the result is different array([4., 2., 3., 8.]) >>> a[:,newaxis] # view `a` as a 2D column vector array([[4.], [2.]]) >>> np.column_stack((a[:,newaxis],b[:,newaxis])) array([[4., 3.], [2., 8.]]) >>> np.hstack((a[:,newaxis],b[:,newaxis])) # the result is the same array([[4., 3.], [2., 8.]]) On the other hand, the function row_stack is equivalent to vstack for any input arrays. In fact, row_stack is an alias for vstack: >>> >>> np.column_stack is np.hstack False >>> np.row_stack is np.vstack True In general, for arrays with more than two dimensions, hstack stacks along their second axes, vstack stacks along their first axes, and concatenate allows for an optional arguments giving the number of the axis along which the concatenation should happen. Note In complex cases, r_ and c_ are useful for creating arrays by stacking numbers along one axis. They allow the use of range literals (“:”) >>> >>> np.r_[1:4,0,4] array([1, 2, 3, 0, 4]) When used with arrays as arguments, r_ and c_ are similar to vstack and hstack in their default behavior, but allow for an optional argument giving the number of the axis along which to concatenate. See also hstack, vstack, column_stack, concatenate, c_, r_ Splitting one array into several smaller ones Using hsplit, you can split an array along its horizontal axis, either by specifying the number of equally shaped arrays to return, or by specifying the columns after which the division should occur: >>> >>> a = np.floor(10*rg.random((2,12))) >>> a array([[6., 7., 6., 9., 0., 5., 4., 0., 6., 8., 5., 2.], [8., 5., 5., 7., 1., 8., 6., 7., 1., 8., 1., 0.]]) # Split a into 3 >>> np.hsplit(a,3) [array([[6., 7., 6., 9.], [8., 5., 5., 7.]]), array([[0., 5., 4., 0.], [1., 8., 6., 7.]]), array([[6., 8., 5., 2.], [1., 8., 1., 0.]])] # Split a after the third and the fourth column >>> np.hsplit(a,(3,4)) [array([[6., 7., 6.], [8., 5., 5.]]), array([[9.], [7.]]), array([[0., 5., 4., 0., 6., 8., 5., 2.], [1., 8., 6., 7., 1., 8., 1., 0.]])] vsplit splits along the vertical axis, and array_split allows one to specify along which axis to split. Copies and Views When operating and manipulating arrays, their data is sometimes copied into a new array and sometimes not. This is often a source of confusion for beginners. There are three cases: No Copy at All Simple assignments make no copy of objects or their data. >>> >>> a = np.array([[ 0, 1, 2, 3], ... [ 4, 5, 6, 7], ... [ 8, 9, 10, 11]]) >>> b = a # no new object is created >>> b is a # a and b are two names for the same ndarray object True Python passes mutable objects as references, so function calls make no copy. >>> >>> def f(x): ... print(id(x)) ... >>> id(a) # id is a unique identifier of an object 148293216 # may vary >>> f(a) 148293216 # may vary View or Shallow Copy Different array objects can share the same data. The view method creates a new array object that looks at the same data. >>> >>> c = a.view() >>> c is a False >>> c.base is a # c is a view of the data owned by a True >>> c.flags.owndata False >>> >>> c = c.reshape((2, 6)) # a's shape doesn't change >>> a.shape (3, 4) >>> c[0, 4] = 1234 # a's data changes >>> a array([[ 0, 1, 2, 3], [1234, 5, 6, 7], [ 8, 9, 10, 11]]) Slicing an array returns a view of it: >>> >>> s = a[ : , 1:3] # spaces added for clarity; could also be written "s = a[:, 1:3]" >>> s[:] = 10 # s[:] is a view of s. Note the difference between s = 10 and s[:] = 10 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) Deep Copy The copy method makes a complete copy of the array and its data. >>> >>> d = a.copy() # a new array object with new data is created >>> d is a False >>> d.base is a # d doesn't share anything with a False >>> d[0,0] = 9999 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) Sometimes copy should be called after slicing if the original array is not required anymore. For example, suppose a is a huge intermediate result and the final result b only contains a small fraction of a, a deep copy should be made when constructing b with slicing: >>> >>> a = np.arange(int(1e8)) >>> b = a[:100].copy() >>> del a # the memory of ``a`` can be released. If b = a[:100] is used instead, a is referenced by b and will persist in memory even if del a is executed. Functions and Methods Overview Here is a list of some useful NumPy functions and methods names ordered in categories. See Routines for the full list. Array Creation arange, array, copy, empty, empty_like, eye, fromfile, fromfunction, identity, linspace, logspace, mgrid, ogrid, ones, ones_like, r_, zeros, zeros_like Conversions ndarray.astype, atleast_1d, atleast_2d, atleast_3d, mat Manipulations array_split, column_stack, concatenate, diagonal, dsplit, dstack, hsplit, hstack, ndarray.item, newaxis, ravel, repeat, reshape, resize, squeeze, swapaxes, take, transpose, vsplit, vstack Questions all, any, nonzero, where Ordering argmax, argmin, argsort, max, min, ptp, searchsorted, sort Operations choose, compress, cumprod, cumsum, inner, ndarray.fill, imag, prod, put, putmask, real, sum Basic Statistics cov, mean, std, var Basic Linear Algebra cross, dot, outer, linalg.svd, vdot Less Basic Broadcasting rules Broadcasting allows universal functions to deal in a meaningful way with inputs that do not have exactly the same shape. The first rule of broadcasting is that if all input arrays do not have the same number of dimensions, a “1” will be repeatedly prepended to the shapes of the smaller arrays until all the arrays have the same number of dimensions. The second rule of broadcasting ensures that arrays with a size of 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is assumed to be the same along that dimension for the “broadcast” array. After application of the broadcasting rules, the sizes of all arrays must match. More details can be found in Broadcasting. Advanced indexing and index tricks NumPy offers more indexing facilities than regular Python sequences. In addition to indexing by integers and slices, as we saw before, arrays can be indexed by arrays of integers and arrays of booleans. Indexing with Arrays of Indices >>> >>> a = np.arange(12)**2 # the first 12 square numbers >>> i = np.array([1, 1, 3, 8, 5]) # an array of indices >>> a[i] # the elements of a at the positions i array([ 1, 1, 9, 64, 25]) >>> >>> j = np.array([[3, 4], [9, 7]]) # a bidimensional array of indices >>> a[j] # the same shape as j array([[ 9, 16], [81, 49]]) When the indexed array a is multidimensional, a single array of indices refers to the first dimension of a. The following example shows this behavior by converting an image of labels into a color image using a palette. >>> >>> palette = np.array([[0, 0, 0], # black ... [255, 0, 0], # red ... [0, 255, 0], # green ... [0, 0, 255], # blue ... [255, 255, 255]]) # white >>> image = np.array([[0, 1, 2, 0], # each value corresponds to a color in the palette ... [0, 3, 4, 0]]) >>> palette[image] # the (2, 4, 3) color image array([[[ 0, 0, 0], [255, 0, 0], [ 0, 255, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 0, 255], [255, 255, 255], [ 0, 0, 0]]]) We can also give indexes for more than one dimension. The arrays of indices for each dimension must have the same shape. >>> >>> a = np.arange(12).reshape(3,4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> i = np.array([[0, 1], # indices for the first dim of a ... [1, 2]]) >>> j = np.array([[2, 1], # indices for the second dim ... [3, 3]]) >>> >>> a[i, j] # i and j must have equal shape array([[ 2, 5], [ 7, 11]]) >>> >>> a[i, 2] array([[ 2, 6], [ 6, 10]]) >>> >>> a[:, j] # i.e., a[ : , j] array([[[ 2, 1], [ 3, 3]], [[ 6, 5], [ 7, 7]], [[10, 9], [11, 11]]]) In Python, arr[i, j] is exactly the same as arr[(i, j)]—so we can put i and j in a tuple and then do the indexing with that. >>> >>> l = (i, j) # equivalent to a[i, j] >>> a[l] array([[ 2, 5], [ 7, 11]]) However, we can not do this by putting i and j into an array, because this array will be interpreted as indexing the first dimension of a. >>> >>> s = np.array([i, j]) # not what we want >>> a[s] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: index 3 is out of bounds for axis 0 with size 3 # same as a[i, j] >>> a[tuple(s)] array([[ 2, 5], [ 7, 11]]) Another common use of indexing with arrays is the search of the maximum value of time-dependent series: >>> >>> time = np.linspace(20, 145, 5) # time scale >>> data = np.sin(np.arange(20)).reshape(5,4) # 4 time-dependent series >>> time array([ 20. , 51.25, 82.5 , 113.75, 145. ]) >>> data array([[ 0. , 0.84147098, 0.90929743, 0.14112001], [-0.7568025 , -0.95892427, -0.2794155 , 0.6569866 ], [ 0.98935825, 0.41211849, -0.54402111, -0.99999021], [-0.53657292, 0.42016704, 0.99060736, 0.65028784], [-0.28790332, -0.96139749, -0.75098725, 0.14987721]]) # index of the maxima for each series >>> ind = data.argmax(axis=0) >>> ind array([2, 0, 3, 1]) # times corresponding to the maxima >>> time_max = time[ind] >>> >>> data_max = data[ind, range(data.shape[1])] # => data[ind[0],0], data[ind[1],1]... >>> time_max array([ 82.5 , 20. , 113.75, 51.25]) >>> data_max array([0.98935825, 0.84147098, 0.99060736, 0.6569866 ]) >>> np.all(data_max == data.max(axis=0)) True You can also use indexing with arrays as a target to assign to: >>> >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a[[1,3,4]] = 0 >>> a array([0, 0, 2, 0, 0]) However, when the list of indices contains repetitions, the assignment is done several times, leaving behind the last value: >>> >>> a = np.arange(5) >>> a[[0,0,2]]=[1,2,3] >>> a array([2, 1, 3, 3, 4]) This is reasonable enough, but watch out if you want to use Python’s += construct, as it may not do what you expect: >>> >>> a = np.arange(5) >>> a[[0,0,2]]+=1 >>> a array([1, 1, 3, 3, 4]) Even though 0 occurs twice in the list of indices, the 0th element is only incremented once. This is because Python requires “a+=1” to be equivalent to “a = a + 1”. Indexing with Boolean Arrays When we index arrays with arrays of (integer) indices we are providing the list of indices to pick. With boolean indices the approach is different; we explicitly choose which items in the array we want and which ones we don’t. The most natural way one can think of for boolean indexing is to use boolean arrays that have the same shape as the original array: >>> >>> a = np.arange(12).reshape(3,4) >>> b = a > 4 >>> b # b is a boolean with a's shape array([[False, False, False, False], [False, True, True, True], [ True, True, True, True]]) >>> a[b] # 1d array with the selected elements array([ 5, 6, 7, 8, 9, 10, 11]) This property can be very useful in assignments: >>> >>> a[b] = 0 # All elements of 'a' higher than 4 become 0 >>> a array([[0, 1, 2, 3], [4, 0, 0, 0], [0, 0, 0, 0]]) You can look at the following example to see how to use boolean indexing to generate an image of the Mandelbrot set: >>> import numpy as np import matplotlib.pyplot as plt def mandelbrot( h,w, maxit=20 ): """Returns an image of the Mandelbrot fractal of size (h,w).""" y,x = np.ogrid[ -1.4:1.4:h*1j, -2:0.8:w*1j ] c = x+y*1j z = c divtime = maxit + np.zeros(z.shape, dtype=int) for i in range(maxit): z = z**2 + c diverge = z*np.conj(z) > 2**2 # who is diverging div_now = diverge & (divtime==maxit) # who is diverging now divtime[div_now] = i # note when z[diverge] = 2 # avoid diverging too much return divtime plt.imshow(mandelbrot(400,400)) ../_images/quickstart-1.png The second way of indexing with booleans is more similar to integer indexing; for each dimension of the array we give a 1D boolean array selecting the slices we want: >>> >>> a = np.arange(12).reshape(3,4) >>> b1 = np.array([False,True,True]) # first dim selection >>> b2 = np.array([True,False,True,False]) # second dim selection >>> >>> a[b1,:] # selecting rows array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> a[b1] # same thing array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> a[:,b2] # selecting columns array([[ 0, 2], [ 4, 6], [ 8, 10]]) >>> >>> a[b1,b2] # a weird thing to do array([ 4, 10]) Note that the length of the 1D boolean array must coincide with the length of the dimension (or axis) you want to slice. In the previous example, b1 has length 3 (the number of rows in a), and b2 (of length 4) is suitable to index the 2nd axis (columns) of a. The ix_() function The ix_ function can be used to combine different vectors so as to obtain the result for each n-uplet. For example, if you want to compute all the a+b*c for all the triplets taken from each of the vectors a, b and c: >>> >>> a = np.array([2,3,4,5]) >>> b = np.array([8,5,4]) >>> c = np.array([5,4,6,8,3]) >>> ax,bx,cx = np.ix_(a,b,c) >>> ax array([[[2]], [[3]], [[4]], [[5]]]) >>> bx array([[[8], [5], [4]]]) >>> cx array([[[5, 4, 6, 8, 3]]]) >>> ax.shape, bx.shape, cx.shape ((4, 1, 1), (1, 3, 1), (1, 1, 5)) >>> result = ax+bx*cx >>> result array([[[42, 34, 50, 66, 26], [27, 22, 32, 42, 17], [22, 18, 26, 34, 14]], [[43, 35, 51, 67, 27], [28, 23, 33, 43, 18], [23, 19, 27, 35, 15]], [[44, 36, 52, 68, 28], [29, 24, 34, 44, 19], [24, 20, 28, 36, 16]], [[45, 37, 53, 69, 29], [30, 25, 35, 45, 20], [25, 21, 29, 37, 17]]]) >>> result[3,2,4] 17 >>> a[3]+b[2]*c[4] 17 You could also implement the reduce as follows: >>> >>> def ufunc_reduce(ufct, *vectors): ... vs = np.ix_(*vectors) ... r = ufct.identity ... for v in vs: ... r = ufct(r,v) ... return r and then use it as: >>> >>> ufunc_reduce(np.add,a,b,c) array([[[15, 14, 16, 18, 13], [12, 11, 13, 15, 10], [11, 10, 12, 14, 9]], [[16, 15, 17, 19, 14], [13, 12, 14, 16, 11], [12, 11, 13, 15, 10]], [[17, 16, 18, 20, 15], [14, 13, 15, 17, 12], [13, 12, 14, 16, 11]], [[18, 17, 19, 21, 16], [15, 14, 16, 18, 13], [14, 13, 15, 17, 12]]]) The advantage of this version of reduce compared to the normal ufunc.reduce is that it makes use of the Broadcasting Rules in order to avoid creating an argument array the size of the output times the number of vectors. Indexing with strings See Structured arrays. Linear Algebra Work in progress. Basic linear algebra to be included here. Simple Array Operations See linalg.py in numpy folder for more. >>> >>> import numpy as np >>> a = np.array([[1.0, 2.0], [3.0, 4.0]]) >>> print(a) [[1. 2.] [3. 4.]] >>> a.transpose() array([[1., 3.], [2., 4.]]) >>> np.linalg.inv(a) array([[-2. , 1. ], [ 1.5, -0.5]]) >>> u = np.eye(2) # unit 2x2 matrix; "eye" represents "I" >>> u array([[1., 0.], [0., 1.]]) >>> j = np.array([[0.0, -1.0], [1.0, 0.0]]) >>> j @ j # matrix product array([[-1., 0.], [ 0., -1.]]) >>> np.trace(u) # trace 2.0 >>> y = np.array([[5.], [7.]]) >>> np.linalg.solve(a, y) array([[-3.], [ 4.]]) >>> np.linalg.eig(j) (array([0.+1.j, 0.-1.j]), array([[0.70710678+0.j , 0.70710678-0.j ], [0. -0.70710678j, 0. +0.70710678j]])) Parameters: square matrix Returns The eigenvalues, each repeated according to its multiplicity. The normalized (unit "length") eigenvectors, such that the column ``v[:,i]`` is the eigenvector corresponding to the eigenvalue ``w[i]`` . Tricks and Tips Here we give a list of short and useful tips. “Automatic” Reshaping To change the dimensions of an array, you can omit one of the sizes which will then be deduced automatically: >>> >>> a = np.arange(30) >>> b = a.reshape((2, -1, 3)) # -1 means "whatever is needed" >>> b.shape (2, 5, 3) >>> b array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [12, 13, 14]], [[15, 16, 17], [18, 19, 20], [21, 22, 23], [24, 25, 26], [27, 28, 29]]]) Vector Stacking How do we construct a 2D array from a list of equally-sized row vectors? In MATLAB this is quite easy: if x and y are two vectors of the same length you only need do m=[x;y]. In NumPy this works via the functions column_stack, dstack, hstack and vstack, depending on the dimension in which the stacking is to be done. For example: >>> >>> x = np.arange(0,10,2) >>> y = np.arange(5) >>> m = np.vstack([x,y]) >>> m array([[0, 2, 4, 6, 8], [0, 1, 2, 3, 4]]) >>> xy = np.hstack([x,y]) >>> xy array([0, 2, 4, 6, 8, 0, 1, 2, 3, 4]) The logic behind those functions in more than two dimensions can be strange. See also NumPy for Matlab users Histograms The NumPy histogram function applied to an array returns a pair of vectors: the histogram of the array and a vector of the bin edges. Beware: matplotlib also has a function to build histograms (called hist, as in Matlab) that differs from the one in NumPy. The main difference is that pylab.hist plots the histogram automatically, while numpy.histogram only generates the data. >>> import numpy as np rg = np.random.default_rng(1) import matplotlib.pyplot as plt # Build a vector of 10000 normal deviates with variance 0.5^2 and mean 2 mu, sigma = 2, 0.5 v = rg.normal(mu,sigma,10000) # Plot a normalized histogram with 50 bins plt.hist(v, bins=50, density=1) # matplotlib version (plot) # Compute the histogram with numpy and then plot it (n, bins) = np.histogram(v, bins=50, density=True) # NumPy version (no plot) plt.plot(.5*(bins[1:]+bins[:-1]), n) ../_images/quickstart-2.png Further reading The Python tutorial NumPy Reference SciPy Tutorial SciPy Lecture Notes A matlab, R, IDL, NumPy/SciPy dictionary © Copyright 2008-2020, The SciPy community. Last updated on Jun 29, 2020. Created using Sphinx 2.4.4.
9
star
6

Denial-of-Service-DoS-DDoS-

A denial of service attack floods a website with a huge amount of Internet traffic, causing its servers to become overwhelmed and crash. Most DDoS attacks are carried out using computers that have been compromised with malware. The owners of infected computers may not even be aware that their machine is sending requests for data to your website. Denial of service attacks can be prevented by: Rate limiting your web server’s router Adding filters to your router to drop packets from dubious sources Dropping spoofed or malformed packets Setting more aggressive timeouts on connections Using firewalls with DDoS protection Using third-party DDoS mitigation software from Akamai, Cloudflare, VeriSign, Arbor Networks or another provider
9
star
7

Robot-learning

In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, also known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans. These robots use guidance mechanisms such as active learning, maturation, motor synergies and imitation. Association rules Main article: Association rule learning See also: Inductive logic programming Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[60] Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[61] Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.[62] For example, the rule {\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}}\{{\mathrm {onions,potatoes}}\}\Rightarrow \{{\mathrm {burger}}\} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.[63] Inductive logic programming (ILP) is an approach to rule-learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting.[64][65][66] Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[67] The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set. Models Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems. Artificial neural networks Main article: Artificial neural network See also: Deep learning An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one artificial neuron to the input of another. Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]
8
star
8

Social-engineering-techniques

In some cases, the greatest weakness in a website’s security system is the people that use it. Social engineering seeks to exploit this weakness. A hacker will convince a website user or administrator to divulge some useful information that helps them exploit the website. There are many forms of social engineering attacks, including: Phishing Users of a website are sent fraudulent emails that look like they have come from the website. The user is asked to divulge some information, such as their login details or personal information. The hacker can use this information to compromises the website. Baiting This is a classic social engineering technique that was first used in the 1970s. A hacker will leave a device near your place of business, perhaps marked with a label like “employee salaries”. One of your employees might pick it up and insert it into their computer out of curiosity. The USB stick will contain malware that infects your computer networks and compromises your website. Pretexting A hacker will contact you, one of your customers or an employee and pretend to be someone else. They will demand sensitive information, which they use to compromise your website. The best way to eliminate social engineering attacks is to educate your employees and customers about these kinds of attacks.
7
star
9

web

HTML
6
star
10

Non-targeted-website-hacking

In many cases, hackers won’t specifically target your website. They will be targeting a vulnerability that exists for a content management system, plugin, or template. For example, they may have developed a hack that targets a vulnerability in a particular version of WordPress, Joomla, or another content management system. They will use automated bots to find websites using this version of the content management system in question before launching an attack. They might use the vulnerability to delete data from your website, steal sensitive information, or to insert malicious software onto your server. The best way to avoid website hacking attacks is to ensure your content management system, plugins, and templates are all up-to-date.
6
star
11

epass

Using this tool, you can create a password in a more realistic list using people analysis
5
star
12

Recognition

The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of the recognition problem are described in the literature:[citation needed] Object recognition (also called object classification) – one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles and LikeThat provide stand-alone programs that illustrate this functionality. Identification – an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle. Detection – the image data are scanned for a specific condition. Examples include detection of possible abnormal cells or tissues in medical images or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation. Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition.[29] Performance of convolutional neural networks on the ImageNet tests is now close to that of humans.[29] The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease[citation needed]. Several specialized tasks based on recognition exist, such as: Content-based image retrieval – finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative a target image (give me all images similar to image X), or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter, and have no cars in them). Computer vision for people counter purposes in public places, malls, shopping centres Pose estimation – estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation or picking parts from a bin. Optical character recognition (OCR) – identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII). 2D code reading – reading of 2D codes such as data matrix and QR codes. Facial recognition Shape Recognition Technology (SRT) in people counter systems differentiating human beings (head and shoulder patterns) from objects
4
star
13

Military

Military applications are probably one of the largest areas for computer vision. The obvious examples are detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene which can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.
4
star
14

Overview

Knowledge-representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used to solve complex problems. The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems. For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical. Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[10] A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL). There is no more powerful formalism than that used by mathematicians to define general propositions about the world. However, FOL has two drawbacks as a knowledge representation formalism: ease of use and practicality of implementation. First order logic can be intimidating even for many software developers. Languages that do not have the complete formal power of FOL can still provide close to the same expressive power with a user interface that is more practical for the average developer to understand. The issue of practicality of implementation is that FOL in some ways is too expressive. With FOL it is possible to create statements (e.g. quantification over infinite sets) that would cause a system to never terminate if it attempted to verify them. Thus, a subset of FOL can be both easier to use and more practical to implement. This was a driving motivation behind rule-based expert systems. IF-THEN rules provide a subset of FOL but a very useful one that is also very intuitive. The history of most of the early AI knowledge representation formalisms; from databases to semantic nets to theorem provers and production systems can be viewed as various design decisions on whether to emphasize expressive power or computability and efficiency.[11] In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework:[12] A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it. It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world? It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends. It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences. It is a medium of human expression, i.e., a language in which we say things about the world. Knowledge representation and reasoning are a key enabling technology for the Semantic Web. Languages based on the Frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[13] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[14] The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[15]
3
star
15

dexs-code

Build a simple structure for hackers and use it
3
star
16

boot-grub

How do I boot my PC from GRUB? [duplicate] Asked 3 years, 4 months ago Active 2 years, 5 months ago Viewed 107k times 16 12 This question already has answers here: How can I repair grub? (How to get Ubuntu back after installing Windows?) (14 answers) Closed 2 years ago. I can no longer boot Ubuntu following a corruption problem initially reported here (How do I solve the "invalid arch dependent elf magic" error message). When I power my laptop, I now get the following message: GNU GRUB version 2.02~beta2-9ubuntu1.7 Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists possible device or file completions." and then the prompt grub> Can anyone help me get back to Ubuntu ? boot grub2 share improve this question follow edited Jul 2 '17 at 18:17 You'reAGitForNotUsingGit 13.4k99 gold badges3838 silver badges7373 bronze badges asked Jun 28 '17 at 7:49 Mons 15911 gold badge33 silver badges77 bronze badges Comments are not for extended discussion; this conversation has been moved to chat. – Thomas Ward♦ Jul 2 '17 at 19:03 Big thumbs up to everybody. I chased down a few leads and finally solved the problem by booting from a live CD and entering the command "sudo update-grub" @derHugo your help was massively useful. – Mons Jul 2 '17 at 19:29 1 The various hints given by @derHugo can be found on the chat site on Stack Exchange here: chat.stackexchange.com/rooms/61451/… and in particular when he recommended googling the question, here: "Haha sorry always forget that ^^ anyway please Google it there are tons of articles addressing this issue e.g. here is another one". Anyway, I'm absolutely made up to be up and running on my laptop again. I know I'm not supposed to say thanks, but I will all the same. THANKS ! – Mons Jul 2 '17 at 21:59 add a comment 3 Answers 35 Ok, from grub type ls (hd0,1)/ you should see a file named vmlinuz or linux, and initrd.img Type linux (hd0,1)/vmlinuz root=/dev/sda1 or linux (hd0,1)/linux root=/dev/sda1 depending on what you found with ls (hd0,1)/, then: initrd (hd0,1)/initrd.img boot If you get initramfs rescue mode enter your password, then startx. You should now have a desktop. Use gparted to check your file system, if it reports an error, then you need to boot from a LiveCD or other media to fix it .... DO NOT attempt to repair a mounted partition. The following three commands fix many grub boot problems. They run quick so just do all three instead of trying to find which one you need. sudo grub-install /dev/sda && sudo update-grub && sudo update-initramfs -u Reboot and see what you get. share improve this answer follow edited Feb 2 '18 at 5:55 answered Jul 2 '17 at 18:49 ravery 6,11955 gold badges1616 silver badges3535 bronze badges 3 Is it really innitrd or should it be initrd? – WinEunuuchs2Unix Jul 2 '17 at 19:52 1 It is actually initrd – answerSeeker Jul 2 '17 at 20:19 1 @ravery Yes I could have corrected the typo. I have a weird point of view thinking it's more polite to point out to the author and letting them make the change. That way an otherwise spotless answer doesn't have the "blemish" of an "edited by someone else" flag. I do edit new users' questions frequently though... Have another +1 :) – WinEunuuchs2Unix Jul 2 '17 at 22:18 1 IF you stuck in "INITRAMFS" not in a fully booted system you can't use startx cause there is no desktop there, what you are referring as initramfs should be multi-user.target. – Ravexina♦ Jul 5 '17 at 8:53 3 In case the files above aren't listed, try iterating on ls (hd0,1)/ command, as in: ls (hd0,2)/, ls (hd0,3)/, etc. as suggested in here. – Nae Dec 23 '18 at 17:46 show 9 more comments 1 The most probable cause to that issue is installing the OS to a disk, grub to a different disk that is not removable. Then removing the OS disk. You could just plug the USB stick back in. Problem solved. share improve this answer follow edited Jul 2 '17 at 19:17 answered Jul 2 '17 at 19:02 RobotHumans 26.2k33 gold badges6969 silver badges109109 bronze badges Solution found (see above). – Mons Jul 2 '17 at 19:29 I saw the chat conversation. Solved with update-grub, which supports not refutes this answer. – RobotHumans Jul 2 '17 at 19:32 add a comment 1 Restart your system. Press f2 key while loading. Goto boot option. Press f5/f6 to change values (which os you want to install keep it in first place.). Enter f10 key....It may solve your problem. . . . If not enter this in grub rescue mode.... ls (hd0) (hd0,msdos6) (hd0,msdos5)....(hd0,msdos1) OR (hd0) (hd0,gpt6).....(hd0,gpt1) set boot=(hd0,gpt6) OR set boot=(hd0,msdos6) set prefix=(hd0,gpt6)/boot/grub OR use msdos6 instead. insmod normal normal This may solve your problem. share improve this answer follow answered Jul 9 '17 at 8:11 Pavan Rao 1122 bronze badges these are various ways to launch grub. this is not the issue as OP states. grub launches but drops to recovery mode. The issue is that the grub config file is missing – ravery Jul 9 '17 at 8:23 add a comment Not the answer you're looking for? Browse other questions tagged boot grub2 or ask your own question.
3
star
17

History

The earliest work in computerized knowledge representation was focused on general problem solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959. These systems featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each subgoal. In these early days of AI, general search algorithms such as A* were also developed. However, the amorphous problem definitions for systems such as GPS meant that they worked only for very constrained toy domains (e.g. the "blocks world"). In order to tackle non-toy problems, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth realized that it was necessary to focus systems on more constrained problems. These efforts led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that resulted in expert systems in the 1970s and 80s, production systems, frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis. Expert systems gave us the terminology still in use today where AI systems are divided into a Knowledge Base with facts about the world and rules and an inference engine that applies the rules to the knowledge base in order to answer questions and solve problems. In these early systems the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.[2] In addition to expert systems, other researchers developed the concept of frame-based languages in the mid-1980s. A frame is similar to an object class: It is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used on systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations such as ordering food in a restaurant narrow the search space and allow the system to choose appropriate responses to dynamic situations. It was not long before the frame communities and the rule-based researchers realized that there was synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic such as the process to make a medical diagnosis. Integrated systems were developed that combined Frames and Rules. One of the most powerful and well known was the 1983 Knowledge Engineering Environment (KEE) from Intellicorp. KEE had a complete rule engine with forward and backward chaining. It also had a complete frame based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than AI it was quickly embraced by AI researchers as well in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.[3] The integration of Frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics spun off from various research projects. At the same time as this was occurring, there was another strain of research that was less commercially focused and was driven by mathematical logic and automated theorem proving. One of the most influential languages in this research was the KL-ONE language of the mid-'80s. KL-ONE was a frame language that had a rigorous semantics, formal definitions for concepts such as an Is-A relation.[4] KL-ONE and languages that were influenced by it such as Loom had an automated reasoning engine that was based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefine a class to be a subclass or superclass of some other class that wasn't formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an Ontology).[5] Another area of knowledge representation research was the problem of common sense reasoning. One of the first realizations learned from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent. Basic principles of common sense physics, causality, intentions, etc. An example is the frame problem, that in an event driven logic there need to be axioms that state things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that can converse with humans using natural language and can process basic statements and questions about the world, it is essential to represent this kind of knowledge. One of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own Frame language and had large numbers of analysts document various areas of common sense reasoning in that language. The knowledge recorded in Cyc included common sense models of time, causality, physics, intentions, and many others.[6] The starting point for knowledge representation is the knowledge representation hypothesis first formalized by Brian C. Smith in 1985:[7] Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge. Currently one of the most active areas of knowledge representation research are projects associated with the Semantic Web. The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts. Searching for a concept will be more effective than traditional text only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. The automatic classification gives developers technology to provide order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet. Recent projects funded primarily by the Defense Advanced Research Projects Agency (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines.[8][9]
3
star
18

wizinit-pro

HTML
3
star
19

An-ANN

An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[68]
3
star
20

DNS-Spoofing-DNS-cache-poisoning-

This hacking technique injects corrupt domain system data into a DNS resolver’s cache to redirect where a website’s traffic is sent. It is often used to send traffic from legitimate websites to malicious websites that contain malware. DNS spoofing can also be used to gather information about the traffic being diverted. The best techniques for preventing DNS spoofing is to set short TTL times and regularly clear the DNS caches of local machines.
3
star
21

Self-learning

Self-learning as a machine learning paradigm was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA).[44] It is a learning with no external rewards and no external teacher advice. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.[45] The self-learning algorithm updates a memory matrix W =||w(a,s)|| such that in each iteration executes the following machine learning routine:
3
star
22

Machine-Vision

A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a manufacturing process. One example is quality control where details or final products are being automatically inspected in order to find defects. Another example is measurement of position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in agricultural process to remove undesirable food stuff from bulk material, a process called optical sorting.[25]
3
star
23

Reinforcement-learning

Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In machine learning, the environment is typically represented as a Markov decision process (MDP). Many reinforcement learning algorithms use dynamic programming techniques.[43] Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP, and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.
3
star
24

Cross-Site-Scripting-XSS-

Cross Site Scripting is a major vulnerability that is often exploited by hackers for website hacking. It is one of the more difficult vulnerabilities to deal with because of the way it works. Some of the largest websites in the world have dealt with successful XSS attacks including Microsoft and Google. Most XSS website hacking attacks use malicious Javascript scripts that are embedded in hyperlinks. When the user clicks the link, it might steal personal information, hijack a web session, take over a user account, or change the advertisements that are being displayed on a page. Hackers will often insert these malicious links into web forums, social media websites, and other prominent locations where users will click them. To avoid XSS attacks, website owners must filter user input to remove any malicious code.
3
star
25

Supervised-learning

Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[38] The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[39] An optimal function will allow the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[13] Types of supervised learning algorithms include active learning, classification and regression.[40] Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
3
star
26

codeflask

Using this tool, you can program graphically with HTML and CSS And program without programming.
HTML
2
star
27

wifi-cracker

With this tool, you can crack Wi-Fi and infiltrate them.
2
star
28

numpy-learning

This project is a tool that helps to remember the Numpy package. In fact, it is an example of all Numpy commands.
Python
2
star
29

c-devops

HTML
2
star
30

history1

In the late 1960s, computer vision began at universities which were pioneering artificial intelligence. It was meant to mimic the human visual system, as a stepping stone to endowing robots with intelligent behavior.[11] In 1966, it was believed that this could be achieved through a summer project, by attaching a camera to a computer and having it "describe what it saw".[12][13] What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation.[11] The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields.[14] By the 1990s, some of the previous research topics became more active than the others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering.[11] Recent work has seen the resurgence of feature-based methods, used in conjunction with machine learning techniques and complex optimization frameworks.[15][16] The advancement of Deep Learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets for tasks ranging from classification, segmentation and optical flow has surpassed prior methods.[citation needed]
2
star
31

python

HTML
2
star
32

codefoxs

This software is for graphic work and can be used to graphically design a website.
HTML
2
star
33

optmization

The source of the project is optimizing numbers and converting zeros and ones to one
Python
2
star
34

builder-html

framework web development from qitSource - Startup
HTML
2
star
35

vairal

This game is actually a challenge to become popular and famous, which is achieved by using input information and users can earn points in it.
HTML
2
star
36

Distinctions

The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, something which can be interpreted as there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented.
2
star
37

tkinter

Python interface to Tcl/Tk Source code: Lib/tkinter/__init__.py The tkinter package (“Tk interface”) is the standard Python interface to the Tk GUI toolkit. Both Tk and tkinter are available on most Unix platforms, as well as on Windows systems. (Tk itself is not part of Python; it is maintained at ActiveState.) Running python -m tkinter from the command line should open a window demonstrating a simple Tk interface, letting you know that tkinter is properly installed on your system, and also showing what version of Tcl/Tk is installed, so you can read the Tcl/Tk documentation specific to that version.
Python
2
star
38

Optimization

Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.[31]
2
star
39

codeboxs

With this tool you can edit your code and remove spaces and malicious characters from it and standardize your code
HTML
2
star
40

New-AI

هوش مصنوعی (به انگلیسی: Artificial Intelligence) که گاهی اوقات هوش ماشینی نامیده می‌شود، به هوشمندی نشان داده شده توسط ماشین‌ها در شرایط مختلف اطلاق می‌شود که در مقابل هوش طبیعی در انسان‌ها قرار دارد. به عبارت دیگر هوش مصنوعی به سامانه‌هایی گفته می‌شود که می‌توانند واکنش‌هایی مشابه رفتارهای هوشمند انسانی از جمله درک شرایط پیچیده، شبیه‌سازی فرایندهای تفکری و شیوه‌های استدلالی انسانی و پاسخ موفق به آنها، یادگیری و توانایی کسب دانش و استدلال برای حل مسایل را داشته باشند.[۲] بیشتر نوشته‌ها و مقاله‌های مربوط به هوش مصنوعی،[۳] آن را به عنوان (دانش شناخت و طراحی عامل‌های هوشمند) تعریف کرده‌اند.[۴][۵] هوش مصنوعی را باید گستره پهناور تلاقی و ملاقات بسیاری از دانش‌ها، علوم، و فنون قدیم و جدید دانست. ریشه‌ها و ایده‌های اصلی آن را باید در فلسفه، زبان‌شناسی، ریاضیات، روان‌شناسی، عصب‌شناسی، فیزیولوژی، تئوری کنترل، احتمالات و بهینه‌سازی جستجو کرد و کاربردهای گوناگون و فراوانی در علوم رایانه، علوم مهندسی، علوم زیست‌شناسی و پزشکی، علوم اجتماعی و بسیاری از علوم دیگر دارد. از زبان‌های برنامه‌نویسی هوش مصنوعی می‌توان به لیسپ، پرولوگ، کلیپس و ویپی اکسپرت اشاره کرد. یک «عامل هوشمند» سامانه‌ای است که با شناخت محیط اطراف خود، شانس موفقیت خود را پس از تحلیل و بررسی افزایش می‌دهد.[۶] جان مکارتی که واژه هوش مصنوعی را در سال ۱۹۵۶ استفاده نمود، آن را «دانش و مهندسی ساخت ماشین‌های هوشمند» تعریف کرده‌است. هوش مصنوعی در علم پزشکی امروزه به دلیل گسترش دانش و پیچیده‌تر شدن فرایند تصمیم‌گیری، استفاده از سامانه‌های اطلاعاتی به خصوص سامانه‌های هوش مصنوعی در تصمیم‌گیری، اهمیت بیشتری یافته‌است. گسترش دانش در حوزهٔ پزشکی و پیچیدگی تصمیمات مرتبط با تشخیص و درمان - به عبارتی حیات انسان - توجه متخصصین را به استفاده از سامانه‌های پشتیبان تصمیم‌گیری در امور پزشکی جلب نموده‌است. به همین دلیل، استفاده از انواع مختلف سامانه‌های هوشمند در پزشکی رو به افزایش است، به گونه‌ای که امروزه تأثیر انواع سامانه‌های هوشمند در پزشکی مورد مطالعه قرار گرفته‌است.
2
star
41

Further-reading

Nils J. Nilsson, Introduction to Machine Learning. Trevor Hastie, Robert Tibshirani and Jerome H. Friedman (2001). The Elements of Statistical Learning, Springer. ISBN 0-387-95284-5. Pedro Domingos (September 2015), The Master Algorithm, Basic Books, ISBN 978-0-465-06570-7 Ian H. Witten and Eibe Frank (2011). Data Mining: Practical machine learning tools and techniques Morgan Kaufmann, 664pp., ISBN 978-0-12-374856-0. Ethem Alpaydin (2004). Introduction to Machine Learning, MIT Press, ISBN 978-0-262-01243-0. David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1 Richard O. Duda, Peter E. Hart, David G. Stork (2001) Pattern classification (2nd edition), Wiley, New York, ISBN 0-471-05669-3. Christopher Bishop (1995). Neural Networks for Pattern Recognition, Oxford University Press. ISBN 0-19-853864-2. Stuart Russell & Peter Norvig, (2009). Artificial Intelligence – A Modern Approach. Pearson, ISBN 9789332543515. Ray Solomonoff, An Inductive Inference Machine, IRE Convention Record, Section on Information Theory, Part 2, pp., 56–62, 1957. Ray Solomonoff, An Inductive Inference Machine A privately circulated report from the 1956 Dartmouth Summer Research Conference on A
2
star
42

dexs

Use this framework to properly categorize and configure your files for each project And start your work without the hassle of building and configuring.
HTML
2
star
43

Unsupervised-learning

Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. The algorithms, therefore, learn from test data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. A central application of unsupervised learning is in the field of density estimation in statistics, such as finding the probability density function.[41] Though unsupervised learning encompasses other domains involving summarizing and explaining data features. Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.
2
star
44

Risks-of-narrow-AI

Main article: Workplace impact of artificial intelligence Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies.[249] Some are concerned about algorithmic bias, that AI programs may unintentionally become biased after processing data that exhibits bias.[250] Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than the average COMPAS-assigned risk level of white defendants.[251]
2
star
45

Autonomous-vehicles

One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars or trucks), aerial vehicles, and unmanned aerial vehicles (UAV). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g. for knowing where it is, or for producing a map of its environment (SLAM) and for detecting obstacles. It can also be used for detecting certain task specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars, but this technology has still not reached a level where it can be put on the market. There are ample examples of military autonomous vehicles ranging from advanced missiles to UAVs for recon missions or missile guidance. Space exploration is already being made with autonomous vehicles using computer vision, e.g., NASA's Curiosity and CNSA's Yutu-2 rover.
2
star
46

Statistics

Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns.[32] According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[33] He also suggested the term data science as a placeholder to call the overall field.[33] Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model,[34] wherein "algorithmic model" means more or less the machine learning algorithms like Random forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.[35]
2
star
47

Cross-site-request-forgery-CSRF-or-XSRF-

Cross-site request forgery is a common malicious exploit of websites. It occurs when unauthorised commands are transmitted from a user that a web application trusts. The user is usually logged into the website, so they have a higher level of privileges, allowing the hacker to transfer funds, obtain account information or gain access to sensitive information. There are many ways for hackers to transmit forged commands including hidden forms, AJAX, and image tags. The user is not aware that the command has been sent and the website believes that the command has come from an authenticated user. The main difference between an XSS and CSRF attack is that the user must be logged in and trusted by a website for a CSRF wesbite hacking attack to work. Website owners can prevent CSRF attacks by checking HTTP headers to verify where the request is coming from and check CSRF tokens in web forms. These checks will ensure that the request has come from a page inside the web application and not an external source.
2
star
48

Guesspassword

The source of the password occurrence project in the genetic algorithm.
Python
1
star
49

Cancer-diagnosis

Diagnosis of cancer with artificial intelligence
Python
1
star
50

-Import-a-repository.

1
star
51

plist

Password List For Download in Epass
1
star
52

free

1
star
53

crypto

Cryptography
1
star
54

New

New
1
star
55

Symmetrical-colors

Source of synonymous colors
Python
1
star
56

pandas

Tutorials This is a guide to many pandas tutorials, geared mainly for new users. Internal Guides pandas own 10 Minutes to pandas More complex recipes are in the Cookbook pandas Cookbook The goal of this cookbook (by Julia Evans) is to give you some concrete examples for getting started with pandas. These are examples with real-world data, and all the bugs and weirdness that that entails. Here are links to the v0.1 release. For an up-to-date table of contents, see the pandas-cookbook GitHub repository. To run the examples in this tutorial, you’ll need to clone the GitHub repository and get IPython Notebook running. See How to use this cookbook. A quick tour of the IPython Notebook: Shows off IPython’s awesome tab completion and magic functions. Chapter 1: Reading your data into pandas is pretty much the easiest thing. Even when the encoding is wrong! Chapter 2: It’s not totally obvious how to select data from a pandas dataframe. Here we explain the basics (how to take slices and get columns) Chapter 3: Here we get into serious slicing and dicing and learn how to filter dataframes in complicated ways, really fast. Chapter 4: Groupby/aggregate is seriously my favorite thing about pandas and I use it all the time. You should probably read this. Chapter 5: Here you get to find out if it’s cold in Montreal in the winter (spoiler: yes). Web scraping with pandas is fun! Here we combine dataframes. Chapter 6: Strings with pandas are great. It has all these vectorized string operations and they’re the best. We will turn a bunch of strings containing “Snow” into vectors of numbers in a trice. Chapter 7: Cleaning up messy data is never a joy, but with pandas it’s easier. Chapter 8: Parsing Unix timestamps is confusing at first but it turns out to be really easy. Lessons for New pandas Users For more resources, please visit the main repository. 01 - Lesson: - Importing libraries - Creating data sets - Creating data frames - Reading from CSV - Exporting to CSV - Finding maximums - Plotting data 02 - Lesson: - Reading from TXT - Exporting to TXT - Selecting top/bottom records - Descriptive statistics - Grouping/sorting data 03 - Lesson: - Creating functions - Reading from EXCEL - Exporting to EXCEL - Outliers - Lambda functions - Slice and dice data 04 - Lesson: - Adding/deleting columns - Index operations 05 - Lesson: - Stack/Unstack/Transpose functions 06 - Lesson: - GroupBy function 07 - Lesson: - Ways to calculate outliers 08 - Lesson: - Read from Microsoft SQL databases 09 - Lesson: - Export to CSV/EXCEL/TXT 10 - Lesson: - Converting between different kinds of formats 11 - Lesson: - Combining data from various sources Practical data analysis with Python This guide is a comprehensive introduction to the data analysis process using the Python data ecosystem and an interesting open dataset. There are four sections covering selected topics as follows: Munging Data Aggregating Data Visualizing Data Time Series Excel charts with pandas, vincent and xlsxwriter Using Pandas and XlsxWriter to create Excel charts Various Tutorials Wes McKinney’s (pandas BDFL) blog Statistical analysis made easy in Python with SciPy and pandas DataFrames, by Randal Olson Statistical Data Analysis in Python, tutorial videos, by Christopher Fonnesbeck from SciPy 2013 Financial analysis in python, by Thomas Wiecki Intro to pandas data structures, by Greg Reda Pandas and Python: Top 10, by Manish Amde Pandas Tutorial, by Mikhail Semeniuk indexmodules |next |previous |pandas 0.15.2 documentation » © Copyright 2008-2014, the pandas development team
1
star
57

epass-v3

Creator Password List
HTML
1
star
58

images

images
1
star
59

codedoxs-doc

Python Librarys installer
1
star
60

qit

My installer Package
1
star
61

crypto-master

Make encryption easier with this tool
Go
1
star
62

Create-a-new-repository

1
star
63

sturdy-adventure

1
star
64

qitai

HTML
1
star
65

ad-codefoxs

HTML
1
star
66

wizinet

HTML
1
star
67

data

1
star
68

wizibox-data

HTML
1
star
69

wizinit

HTML
1
star
70

wizibox

HTML
1
star
71

over-ai

HTML
1
star
72

software

HTML
1
star
73

AI-move

Application of rail systems using artificial intelligence in industries
1
star
74

config_Wizify

HTML
1
star
75

not

1
star
76

wizinet-news

HTML
1
star
77

ai

Artificial intelligence From Wikipedia, the free encyclopedia Jump to navigationJump to search "AI" redirects here. For other uses, see AI (disambiguation) and Artificial intelligence (disambiguation). Part of a series on Artificial intelligence Major goals[show] Approaches[show] Philosophy[show] History[show] Technology[show] Glossary[show] vte Artificial intelligence (AI), is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[3] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[4] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[5] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[6] For instance, optical character recognition is frequently excluded from things considered to be AI,[7] having become a routine technology.[8] Modern machine capabilities generally classified as AI include successfully understanding human speech,[9] competing at the highest level in strategic game systems (such as chess and Go),[10] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.[11] Artificial intelligence was founded as an academic discipline in 1955, and in the years since has experienced several waves of optimism,[12][13] followed by disappointment and the loss of funding (known as an "AI winter"),[14][15] followed by new approaches, success and renewed funding.[13][16] For most of its history, AI research has been divided into sub-fields that often fail to communicate with each other.[17] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[18] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[21][22][23] Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).[17] The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[18] General intelligence is among the field's long-term goals.[24] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields. The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".[25] This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction and philosophy since antiquity.[30] Some people also consider AI to be a danger to humanity if it progresses unabated.[31][32] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[33] In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[34][16]
1
star
78

Glossary

1
star
79

test

1
star
80

tools-master

Go
1
star
81

help_audio_spacify

1
star
82

Compill

1
star
83

BLHeli-master

1
star
84

Impact

The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed.[240] A February 2020 European Union white paper on artificial intelligence advocated for artificial intelligence for economic benefits, including "improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, [and] improving the efficiency of production systems through predictive maintenance", while acknowledging potential risks.[193] The relationship between automation and employment is complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects.[241] Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[242] Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk".[243][244][245] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[246] Author Martin Ford and others go further and argue that many jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that "we're in uncharted territory" with AI.[33] The potential negative effects of AI and automation were a major issue for Andrew Yang's 2020 presidential campaign in the United States.[247] Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has expressed that "I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm. [Terrorists could cause harm] via digital warfare, or it could be a combination of robotics, drones, with AI and other things as well that could be really dangerous. And, of course, other risks come from things like job losses. If we have massive numbers of people losing jobs and don't find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed — otherwise there's massive potential of misuse."[248]
1
star
85

codedoxs-v2

CodeDoxs Framswork Version 2.0.1
Python
1
star
86

iport-scanner

It is an open source tool and others can help in its development and growth.The goal is to find open pitfalls on the network and is very useful for security engineering
Python
1
star
87

build-master

[mirror] Go's continuous build and release infrastructure (no stability promises)
Go
1
star
88

chess-robot

The source of a robot that can play chess.
Python
1
star
89

Code-Doxs

Framework for configuring your website project.
1
star
90

test-cupp

1
star
91

codedoxs

Framework for configuring your website project.
HTML
1
star
92

free-org

epass-v3
1
star
93

data-Learn

1
star
94

image

1
star
95

hello-world

1
star
96

win

Windows Simulator
HTML
1
star
97

main_repository

Create Main Repository github for save My File
1
star
98

ga

In fact, this tool is a genetic engine that you can use in your projects and this tool is presented as an open source and you can participate in its development and this is a prototype.
HTML
1
star
99

vainmax

d
HTML
1
star
100

sortednumbers

Sorted Numbers Test
Python
1
star